The hybrid ARIMA-LSTM model leaves room for a variety of experimentation. For the best performance, a balance must be struck between the levels of volatility best suited to the ARIMA and LSTM components. Using shorter MA periods, which produce a non-mesokurtic distribution, may achieve a better volatility balance between the two models.
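As a reference point for "mesokurtic": under Pearson's definition (used later via `kurtosis(..., fisher=False)`), a normal distribution has kurtosis K = 3, and the decomposition below looks for MA periods whose output is close to that value. A minimal sketch with synthetic data (the sample size and seed are arbitrary):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
normal_sample = rng.normal(size=100_000)

# Pearson kurtosis (fisher=False): a normal sample gives K close to 3
k = kurtosis(normal_sample, fisher=False)
print(round(k, 2))
```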
import pandas as pd
pd.set_option('display.max_rows', 500)
import timeit
!pip install -q -U keras-tuner
import keras_tuner as kt
!pip install pmdarima
Successfully installed pmdarima-1.8.4 statsmodels-0.13.1
import pmdarima
url = 'https://launchpad.net/~mario-mariomedina/+archive/ubuntu/talib/+files'
!wget $url/libta-lib0_0.4.0-oneiric1_amd64.deb -qO libta.deb
!wget $url/ta-lib0-dev_0.4.0-oneiric1_amd64.deb -qO ta.deb
!dpkg -i libta.deb ta.deb
!pip install ta-lib
import talib
Setting up libta-lib0 (0.4.0-oneiric1) ...
Setting up ta-lib0-dev (0.4.0-oneiric1) ...
Successfully installed ta-lib-0.4.22
import tensorflow
import keras
import sklearn
import math
import json
import datetime
import os
import warnings
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import pyplot
from matplotlib.pyplot import figure
from numpy import array, hstack
from scipy.stats import kurtosis

# Keras/TensorFlow: use the tensorflow.keras namespace consistently
# (mixing it with the standalone keras package can cause subtle errors)
from tensorflow.keras import Sequential
from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import (Dense, LSTM, Dropout, Bidirectional, BatchNormalization,
                                     Embedding, TimeDistributed, LeakyReLU, GRU, Activation,
                                     RepeatVector)
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from tensorflow.keras import backend as K
from tensorflow.keras.regularizers import l1_l2
from tensorflow.keras.utils import plot_model, get_custom_objects

# statsmodels / pmdarima
import statsmodels.tsa.api
from statsmodels.tsa.api import VAR
from statsmodels.tsa.statespace.varmax import VARMAX, VARMAXResults
from statsmodels.tools.sm_exceptions import ConvergenceWarning
import pmdarima as pm
from pmdarima import auto_arima

# scikit-learn
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error, mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# TA-Lib
from talib import abstract

# plt.rcParams.update({'font.size': 16})
warnings.simplefilter('ignore', ConvergenceWarning)
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
cd drive/MyDrive/Stock price prediction/Generated datasets
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Generated datasets
df = pd.read_csv("FULL_Data_google_COVID_bull_bear.csv",parse_dates=[0])
df.tail(10)
| Unnamed: 0 | Unnamed: 0.1 | Unnamed: 0.1.1 | Unnamed: 0.1.1.1 | Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1592 | 1592 | 1781 | 1781 | 1781 | 150.199997 | 151.429993 | 150.059998 | 150.809998 | 150.809998 | 56787900.0 | 150.565717 | 148.423811 | -1.137777 | 2.817933 | 154.059677 | 142.787944 | 150.767809 | 5.009368 | 93.428749 | -0.061228 | 100.779503 | -0.039111 | 103.599003 | -0.022436 | 2021-11-09 | 19 | 112313 | 1258 | 0.119141 | 0.111328 | NaN | NaN | NaN | NaN |
| 1593 | 1593 | 1782 | 1782 | 1782 | 150.020004 | 150.130005 | 147.850006 | 147.919998 | 147.919998 | 65187100.0 | 150.417145 | 148.729049 | -1.236913 | 2.144358 | 153.017766 | 144.440332 | 148.869268 | 4.989888 | 92.922909 | -0.061683 | 99.694365 | -0.039762 | 101.872301 | -0.022657 | 2021-11-10 | 19 | 80301 | 1470 | 0.154297 | 0.109375 | NaN | NaN | NaN | NaN |
| 1594 | 1594 | 1783 | 1783 | 1783 | 148.960007 | 149.429993 | 147.679993 | 147.869995 | 147.869995 | 41000000.0 | 150.110001 | 149.060477 | -1.165047 | 1.767475 | 152.595428 | 145.525526 | 148.203086 | 4.989548 | 92.416471 | -0.062129 | 98.604584 | -0.040391 | 100.137594 | -0.022839 | 2021-11-11 | 19 | 94975 | 1662 | 0.102845 | 0.126915 | NaN | NaN | NaN | NaN |
| 1595 | 1595 | 1784 | 1784 | 1784 | 148.429993 | 150.399994 | 147.479996 | 149.990005 | 149.990005 | 63632600.0 | 149.895715 | 149.357144 | -0.869308 | 1.420732 | 152.198608 | 146.515681 | 149.394365 | 5.003879 | 91.909483 | -0.062566 | 97.510555 | -0.040998 | 98.396260 | -0.022980 | 2021-11-12 | 19 | 55499 | 797 | 0.157277 | 0.080595 | NaN | NaN | NaN | NaN |
| 1596 | 1596 | 1785 | 1785 | 1785 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2021-11-13 | 19 | 146529 | 2505 | 0.139459 | 0.083243 | NaN | NaN | NaN | NaN |
| 1597 | 1597 | 1786 | 1786 | 1786 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 2021-11-14 | 19 | 40964 | 479 | 0.151261 | 0.100840 | NaN | NaN | NaN | NaN |
| 1598 | 1598 | 1787 | 1787 | 1787 | 150.369995 | 151.880005 | 149.429993 | 150.000000 | 150.000000 | 59222800.0 | 149.758571 | 149.602859 | -0.907641 | 1.229694 | 152.062246 | 147.143471 | 149.798122 | 5.003946 | 91.401994 | -0.062993 | 96.412672 | -0.041581 | 96.649685 | -0.023077 | 2021-11-15 | 22 | 30290 | 148 | 0.136737 | 0.109389 | NaN | NaN | NaN | NaN |
| 1599 | 1599 | 1788 | 1788 | 1788 | 149.940002 | 151.490005 | 149.339996 | 151.000000 | 151.000000 | 59256200.0 | 149.718571 | 149.814763 | -0.791320 | 1.236243 | 152.287250 | 147.342277 | 150.599374 | 5.010635 | 90.894052 | -0.063410 | 95.311334 | -0.042140 | 94.899260 | -0.023130 | 2021-11-16 | 22 | 138962 | 1294 | 0.135531 | 0.115385 | NaN | NaN | NaN | NaN |
| 1600 | 1600 | 1789 | 1789 | 1789 | 151.000000 | 155.000000 | 150.990005 | 153.490005 | 153.490005 | 88807000.0 | 150.154286 | 150.040002 | -0.657719 | 1.467121 | 152.974245 | 147.105759 | 152.526461 | 5.027099 | 90.385704 | -0.063817 | 94.206941 | -0.042673 | 93.146378 | -0.023135 | 2021-11-17 | 22 | 87626 | 1290 | 0.100870 | 0.126957 | NaN | NaN | NaN | NaN |
| 1601 | 1601 | 1790 | 1790 | 1790 | 153.710007 | 158.669998 | 153.050003 | 157.869995 | 157.869995 | 137659100.0 | 151.162857 | 150.450002 | -0.609656 | 2.267825 | 154.985653 | 145.914351 | 156.088817 | 5.055417 | 89.877000 | -0.064214 | 93.099895 | -0.043179 | 91.392433 | -0.023090 | 2021-11-18 | 22 | 111404 | 1637 | 0.145098 | 0.121569 | NaN | NaN | NaN | NaN |
ls
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
'AG Project NB'/   'Akshada Notebooks'/   'Archana - LSTM Hybrid'/
dataset_final_tech_ind_sentiment_score.csv   DL_FinalProject_GoogleTrends_Akshada.ipynb
"Experiment NB's"/   'Generated datasets'/   'PLOTS Akshada'/   Reports/
results_updated.xlsx   results.xlsx   'Stock Closing Price Prediction.pptx'   Stocks/   'Training data'/
cd Archana - LSTM Hybrid/Outputs/full
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/full
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name().head(5)
0    Saturday
1      Sunday
3     Tuesday
7    Saturday
8      Sunday
Name: Date, dtype: object
len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
497
len(df)
1602
len(df) - len(pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name())
1105
df.dropna(inplace=True)
len(df)
1080
(1602 - 497 = 1105, but dropna() also removes the trailing rows whose Fourier bull/bear columns are NaN, leaving 1080.)
pd.to_datetime(df[np.isnan(df.Close)==True]['Date']).dt.day_name()
Series([], Name: Date, dtype: object)
df.head(5)
| Unnamed: 0 | Unnamed: 0.1 | Unnamed: 0.1.1 | Unnamed: 0.1.1.1 | Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 2 | 191 | 191 | 191 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 2017-07-03 | 15 | 0 | 0 | 0.666667 | 0.000000 | 0.142778 | 0.146810 | 0.100537 | 0.099251 |
| 4 | 4 | 193 | 193 | 193 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 2017-07-05 | 15 | 0 | 0 | 0.400000 | 0.000000 | 0.144487 | 0.145833 | 0.100630 | 0.096361 |
| 5 | 5 | 194 | 194 | 194 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 2017-07-06 | 15 | 0 | 0 | 0.142857 | 0.142857 | 0.145346 | 0.145164 | 0.100672 | 0.094761 |
| 6 | 6 | 195 | 195 | 195 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 2017-07-07 | 15 | 0 | 0 | 0.333333 | 0.000000 | 0.146208 | 0.144377 | 0.100711 | 0.093072 |
| 9 | 9 | 198 | 198 | 198 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 2017-07-10 | 14 | 0 | 0 | 0.000000 | 0.000000 | 0.148802 | 0.141354 | 0.100808 | 0.087587 |
stock_col= list(df.columns)
stock_col = stock_col[4:len(stock_col)]
dataset_final = df[stock_col]
dataset_final.head(5)
| Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 2017-07-03 | 15 | 0 | 0 | 0.666667 | 0.000000 | 0.142778 | 0.146810 | 0.100537 | 0.099251 |
| 4 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 2017-07-05 | 15 | 0 | 0 | 0.400000 | 0.000000 | 0.144487 | 0.145833 | 0.100630 | 0.096361 |
| 5 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 2017-07-06 | 15 | 0 | 0 | 0.142857 | 0.142857 | 0.145346 | 0.145164 | 0.100672 | 0.094761 |
| 6 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 2017-07-07 | 15 | 0 | 0 | 0.333333 | 0.000000 | 0.146208 | 0.144377 | 0.100711 | 0.093072 |
| 9 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 2017-07-10 | 14 | 0 | 0 | 0.000000 | 0.000000 | 0.148802 | 0.141354 | 0.100808 | 0.087587 |
# stock_col= list(df.columns)
# stock_col1 = stock_col[4:len(stock_col)-9]
# stock_col2 = stock_col[len(stock_col)-7:len(stock_col)]
# stock_col1.append(stock_col2)
# dataset_final = df
dataset_final.head(5)
| Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | Date | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 2017-07-03 | 15 | 0 | 0 | 0.666667 | 0.000000 | 0.142778 | 0.146810 | 0.100537 | 0.099251 |
| 4 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 2017-07-05 | 15 | 0 | 0 | 0.400000 | 0.000000 | 0.144487 | 0.145833 | 0.100630 | 0.096361 |
| 5 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 2017-07-06 | 15 | 0 | 0 | 0.142857 | 0.142857 | 0.145346 | 0.145164 | 0.100672 | 0.094761 |
| 6 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 2017-07-07 | 15 | 0 | 0 | 0.333333 | 0.000000 | 0.146208 | 0.144377 | 0.100711 | 0.093072 |
| 9 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 2017-07-10 | 14 | 0 | 0 | 0.000000 | 0.000000 | 0.148802 | 0.141354 | 0.100808 | 0.087587 |
# Set the date to datetime data
datetime_series = pd.to_datetime(dataset_final['Date'])
datetime_index = pd.DatetimeIndex(datetime_series.values)
dataset_final = dataset_final.set_index(datetime_index)
dataset_final = dataset_final.sort_values(by='Date')
dataset_final = dataset_final.drop(columns='Date')
dataset_final.head(5)
| Open | High | Low | Close | Adj Close | Volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2017-07-03 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 15 | 0 | 0 | 0.666667 | 0.000000 | 0.142778 | 0.146810 | 0.100537 | 0.099251 |
| 2017-07-05 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 15 | 0 | 0 | 0.400000 | 0.000000 | 0.144487 | 0.145833 | 0.100630 | 0.096361 |
| 2017-07-06 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 15 | 0 | 0 | 0.142857 | 0.142857 | 0.145346 | 0.145164 | 0.100672 | 0.094761 |
| 2017-07-07 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 15 | 0 | 0 | 0.333333 | 0.000000 | 0.146208 | 0.144377 | 0.100711 | 0.093072 |
| 2017-07-10 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 14 | 0 | 0 | 0.000000 | 0.000000 | 0.148802 | 0.141354 | 0.100808 | 0.087587 |
# Get features and target
X_value = pd.DataFrame(dataset_final.iloc[:, :])
y_value = pd.DataFrame(dataset_final.iloc[:, 3])
y_value.head(5)
| Close | |
|---|---|
| 2017-07-03 | 35.875000 |
| 2017-07-05 | 36.022499 |
| 2017-07-06 | 35.682499 |
| 2017-07-07 | 36.044998 |
| 2017-07-10 | 36.264999 |
# Normalized the data
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaler.fit(X_value)
y_scaler.fit(y_value)
MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
X_scale_dataset.shape, y_scale_dataset.shape,
((1080, 29), (1080, 1))
X_value.shape[1]
29
n_steps_in = 3
n_features = X_value.shape[1]  # 29 features
n_steps_out = 1
# Reshape the data
'''Set the data input and output steps:
here we use n_steps_in = 3 days of data to predict n_steps_out = 1 day's price,
reshaped to (None, n_steps_in, n_features) for LSTM input'''
# Get X/y dataset
def get_X_y(X_data, y_data):
    X = list()
    y = list()
    yc = list()
    length = len(X_data)
    for i in range(0, length, 1):
        X_value = X_data[i: i + n_steps_in][:, :]
        y_value = y_data[i + n_steps_in: i + (n_steps_in + n_steps_out)][:, 0]
        yc_value = y_data[i: i + n_steps_in][:, :]
        # Keep only complete windows: n_steps_in input rows with n_steps_out target rows
        if len(X_value) == n_steps_in and len(y_value) == n_steps_out:
            X.append(X_value)
            y.append(y_value)
            yc.append(yc_value)
    return np.array(X), np.array(y), np.array(yc)
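To sanity-check the windowing, here is a toy run of the same sliding-window logic on made-up data (10 rows, 2 features): with 3 input steps and 1 output step, N rows yield N - 3 complete windows, matching the 1080 → 1077 reduction seen below.

```python
import numpy as np

# Toy data (made up): 10 rows, 2 features
n_steps_in, n_steps_out = 3, 1
X_data = np.arange(20).reshape(10, 2)
y_data = np.arange(10).reshape(10, 1)

X_toy, y_toy = [], []
for i in range(len(X_data)):
    x_win = X_data[i:i + n_steps_in]
    y_win = y_data[i + n_steps_in:i + n_steps_in + n_steps_out, 0]
    # Keep only complete windows, as in get_X_y above
    if len(x_win) == n_steps_in and len(y_win) == n_steps_out:
        X_toy.append(x_win)
        y_toy.append(y_win)
X_toy, y_toy = np.array(X_toy), np.array(y_toy)
print(X_toy.shape, y_toy.shape)  # (7, 3, 2) (7, 1)
```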
# Get the train/test prediction indices
def predict_index(dataset, X_train, n_steps_in, n_steps_out):
    # Skip the first n_steps_in days: the first prediction needs a full input window
    train_predict_index = dataset.iloc[n_steps_in : X_train.shape[0] + n_steps_in + n_steps_out - 1, :].index
    test_predict_index = dataset.iloc[X_train.shape[0] + n_steps_in:, :].index
    return train_predict_index, test_predict_index
# Note: this redefines the mean_absolute_percentage_error imported from sklearn.metrics
def mean_absolute_percentage_error(actual, prediction):
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)
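A quick check of the MAPE helper on toy values (the numbers are arbitrary; the function is restated so the snippet is self-contained): errors of 10 on 100 and 20 on 200 are each 10%, so the mean is 10.

```python
import numpy as np
import pandas as pd

def mean_absolute_percentage_error(actual, prediction):
    actual = pd.Series(actual)
    prediction = pd.Series(prediction)
    return 100 * np.mean(np.abs(actual - prediction) / actual)

print(round(mean_absolute_percentage_error([100.0, 200.0], [110.0, 180.0]), 2))  # 10.0
```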
# Split train/test dataset: chronological 75/25 split (no shuffling for time series)
def split_train_test(data):
    train_size = round(len(data) * 0.75)
    data_train = data[0:train_size]
    data_test = data[train_size:]
    return data_train, data_test
# Get data and check shape
# X has shape (1077, 3, 29): each 3 x 29 slice is 3 days of all 29 features;
# yc holds the matching 3-day windows of the (scaled) closing price
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
X_train, X_test = split_train_test(X)
y_train, y_test = split_train_test(y)
yc_train, yc_test = split_train_test(yc)
index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
# %% --------------------------------------- Check dataset shapes -----------------------------------------------------------------
print('X shape: ', X.shape)
print('y shape: ', y.shape)
print('X_train shape: ', X_train.shape)
print('y_train shape: ', y_train.shape)
print('y_c_train shape: ', yc_train.shape)
print('X_test shape: ', X_test.shape)
print('y_test shape: ', y_test.shape)
print('y_c_test shape: ', yc_test.shape)
print('index_train shape:', index_train.shape)
print('index_test shape:', index_test.shape)
X shape:  (1077, 3, 29)
y shape:  (1077, 1)
X_train shape:  (808, 3, 29)
y_train shape:  (808, 1)
y_c_train shape:  (808, 3, 1)
X_test shape:  (269, 3, 29)
y_test shape:  (269, 1)
y_c_test shape:  (269, 3, 1)
index_train shape: (808,)
index_test shape: (269,)
output_dim = y_train.shape[1]
output_dim
1
df = dataset_final.copy()
df.rename(columns={'Date':'date','Open':'open','Low':'low','Close':'close','Volume':'volume','High':'high'}, inplace = True)
df.reset_index(drop=True,inplace=True)
df.head(1)
| open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 36.220001 | 36.325001 | 35.775002 | 35.875 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.96052 | 38.672945 | 34.830864 | 35.924548 | 3.55177 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 15 | 0 | 0 | 0.666667 | 0.0 | 0.142778 | 0.14681 | 0.100537 | 0.099251 |
# df.drop(['volume', 'MACD','20SD','logmomentum','absolute of 3 comp','angle of 3 comp','absolute of 6 comp','angle of 6 comp','absolute of 9 comp','angle of 9 comp'], axis='columns', inplace=True) # only keep columns that can help as residuals in Arima Hybrid
df.head(1)
| open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | search | COVID positiveIncrease | COVID deathIncrease | bull score | bear score | fourier bull 10 | fourier bull 30 | fourier bear 10 | fourier bear 30 | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 36.220001 | 36.325001 | 35.775002 | 35.875 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.96052 | 38.672945 | 34.830864 | 35.924548 | 3.55177 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 15 | 0 | 0 | 0.666667 | 0.0 | 0.142778 | 0.14681 | 0.100537 | 0.099251 |
test_len = len(X_test)
train_len = len(X_train)
test_len, train_len
(269, 808)
# Initialize moving averages from TA-Lib, store the functions in a dictionary
# (MIDPRICE was removed because the output here is univariate)
talib_moving_averages = ['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA', 'TRIMA']
functions = {}
for ma in talib_moving_averages:
    functions[ma] = abstract.Function(ma)
# Determine kurtosis "K" values for MA periods 4-99
kurtosis_results = {'period': []}
for i in range(4, 100):
    kurtosis_results['period'].append(i)
    for ma in talib_moving_averages:
        # Run each moving average on the training portion (the last test_len rows are held
        # out for testing) and trim the result to its last 14 values
        ma_output = functions[ma](df[:-test_len], i).tail(14)
        # Determine the kurtosis "K" value (Pearson definition: a normal distribution has K = 3)
        k = kurtosis(ma_output, fisher=False)
        # Add to the dictionary
        if ma not in kurtosis_results.keys():
            kurtosis_results[ma] = []
        kurtosis_results[ma].append(k)
kurtosis_results = pd.DataFrame(kurtosis_results)
kurtosis_results.to_csv('kurtosis_results.csv')
kurtosis_results.head(5)
| period | SMA | EMA | WMA | DEMA | KAMA | MIDPOINT | T3 | TEMA | TRIMA | |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 4 | 2.272452 | 2.652772 | 2.896972 | 3.800351 | 2.299585 | 2.171369 | 1.978458 | 4.609342 | 2.411225 |
| 1 | 5 | 1.839451 | 2.355815 | 2.481058 | 3.327525 | 1.841282 | 1.826597 | 1.640277 | 4.262302 | 1.994382 |
| 2 | 6 | 1.583886 | 2.159532 | 2.194320 | 2.945924 | 1.536136 | 1.605787 | 1.510972 | 3.878845 | 1.679710 |
| 3 | 7 | 1.461290 | 2.026758 | 1.990629 | 2.651927 | 1.506197 | 1.558096 | 1.514015 | 3.510432 | 1.486348 |
| 4 | 8 | 1.447516 | 1.935302 | 1.853935 | 2.429648 | 1.509566 | 1.621595 | 1.601580 | 3.184123 | 1.373337 |
# Determine period with K closest to 3 +/-5%
optimized_period = {}
# https://pypi.org/project/TA-Lib/ determines the type of moving average to use
# https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.at.html#pandas.DataFrame.at
for ma in talib_moving_averages:
    difference = np.abs(kurtosis_results[ma] - 3)
    df_arimahyb = pd.DataFrame({'difference': difference, 'period': kurtosis_results['period']})
    df_arimahyb = df_arimahyb.sort_values(by=['difference'], ascending=True).reset_index(drop=True)
    # Accept the period only if its best K lies within +/-5% of the mesokurtic value 3
    if df_arimahyb.at[0, 'difference'] < 3 * 0.05:
        optimized_period[ma] = df_arimahyb.at[0, 'period']
    else:
        print(ma + ' is not viable: its best K falls outside 3 +/-5%')
print('\nOptimized periods:', optimized_period)
TRIMA is not viable: its best K falls outside 3 +/-5%

Optimized periods: {'SMA': 17, 'EMA': 51, 'WMA': 49, 'DEMA': 89, 'KAMA': 18, 'MIDPOINT': 14, 'T3': 19, 'TEMA': 9}
optimized_period
{'DEMA': 89,
'EMA': 51,
'KAMA': 18,
'MIDPOINT': 14,
'SMA': 17,
'T3': 19,
'TEMA': 9,
'WMA': 49}
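The selection rule above can be illustrated on hypothetical kurtosis values for a single moving average (the numbers here are made up): the period whose K is nearest 3 is accepted only if it falls within ±5% of 3, i.e. |K - 3| < 0.15.

```python
import pandas as pd

# Hypothetical kurtosis results for one moving average
kurt = pd.DataFrame({'period': [4, 5, 6, 7],
                     'K':      [2.3, 2.9, 3.4, 4.1]})
difference = (kurt['K'] - 3).abs()
best = difference.idxmin()           # row whose K is closest to 3
if difference[best] < 3 * 0.05:      # within +/-5% of the mesokurtic value 3
    chosen = int(kurt.at[best, 'period'])
else:
    chosen = None                    # the MA would be reported as "not viable"
print(chosen)  # 5  (K = 2.9, |K - 3| = 0.1 < 0.15)
```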
simulation = {}
for ma in optimized_period:
    print(ma)
    print(functions[ma])
    print(int(optimized_period[ma]))
    # Low-volatility component: each column smoothed with the optimized MA period
    low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
    low_vol = low_vol.fillna(0)
    # High-volatility component: the residual (original minus smoothed)
    high_vol = pd.DataFrame()
    df2 = df.copy()
    for i in df2.columns:
        if i in low_vol.columns:
            high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
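The loop above splits each column additively: the optimized moving average forms a low-volatility component (suited to ARIMA) and the residual forms a high-volatility component (suited to the LSTM). A minimal sketch of that decomposition, using a pandas rolling mean as a stand-in for the TA-Lib SMA and made-up prices:

```python
import numpy as np
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0, 12.0, 13.5])

low_vol = close.rolling(3).mean().fillna(0)   # smoothed, low-volatility part
high_vol = close - low_vol                    # residual, high-volatility part

# The two components sum back to the original series exactly
print(bool(np.allclose(low_vol + high_vol, close)))  # True
```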
low_vol.tail(20)
| open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | search | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1060 | 140.200839 | 141.942909 | 138.524500 | 140.171495 | 139.966842 | 8.852448e+07 | 142.165478 | 146.699207 | 1.815578 | 4.572948 | 155.845103 | 137.553312 | 140.365562 | 4.935800 | 105.739092 | -0.047411 | 125.318767 | -0.018291 | 140.471430 | -0.008749 | 19.573385 |
| 1061 | 139.425914 | 141.705469 | 138.035200 | 140.698014 | 140.492650 | 8.620711e+07 | 141.528981 | 145.978836 | 2.115887 | 4.189393 | 154.357621 | 137.600050 | 140.587196 | 4.939545 | 105.263514 | -0.048037 | 124.464999 | -0.019222 | 139.335869 | -0.009472 | 19.632022 |
| 1062 | 140.773058 | 142.636405 | 139.932338 | 141.733666 | 141.526843 | 7.421445e+07 | 141.294887 | 145.298477 | 2.211018 | 3.647690 | 152.593858 | 138.003097 | 141.351509 | 4.946870 | 104.786174 | -0.048658 | 123.598217 | -0.020150 | 138.164839 | -0.010188 | 19.698090 |
| 1063 | 142.179695 | 143.266994 | 141.127848 | 142.249061 | 142.041527 | 6.519616e+07 | 141.224295 | 144.665584 | 2.093072 | 3.241276 | 151.148137 | 138.183031 | 141.949877 | 4.950518 | 104.307114 | -0.049275 | 122.718682 | -0.021074 | 136.959041 | -0.010898 | 19.763505 |
| 1064 | 142.253947 | 144.008334 | 141.546689 | 142.555532 | 142.347589 | 6.254214e+07 | 141.336839 | 144.184381 | 1.988881 | 2.884864 | 149.954110 | 138.414652 | 142.353647 | 4.952685 | 103.826381 | -0.049886 | 121.826667 | -0.021994 | 135.719217 | -0.011600 | 20.311676 |
| 1065 | 142.782738 | 143.732491 | 141.438660 | 142.125353 | 141.918068 | 6.542511e+07 | 141.385297 | 143.758659 | 1.774804 | 2.626682 | 149.012024 | 138.505294 | 142.201451 | 4.949632 | 103.344020 | -0.050491 | 120.922446 | -0.022909 | 134.446150 | -0.012293 | 20.671514 |
| 1066 | 142.153085 | 142.656915 | 140.466684 | 141.564232 | 141.357788 | 7.040262e+07 | 141.585336 | 143.387397 | 1.634667 | 2.376817 | 148.141030 | 138.633764 | 141.776638 | 4.945637 | 102.860075 | -0.051092 | 120.006305 | -0.023818 | 133.140665 | -0.012977 | 20.900131 |
| 1067 | 142.177201 | 143.194327 | 140.977156 | 142.610382 | 142.402435 | 6.948112e+07 | 141.933749 | 143.094536 | 1.573317 | 2.074153 | 147.242842 | 138.946230 | 142.332468 | 4.953023 | 102.374593 | -0.051687 | 119.078535 | -0.024722 | 131.803627 | -0.013650 | 21.038585 |
| 1068 | 143.009006 | 144.052615 | 142.286776 | 143.812497 | 143.602819 | 6.805244e+07 | 142.378675 | 142.879716 | 1.473333 | 1.874158 | 146.628032 | 139.131400 | 143.319154 | 4.961467 | 101.887619 | -0.052275 | 118.139433 | -0.025618 | 130.435938 | -0.014311 | 21.116168 |
| 1069 | 143.380322 | 145.547752 | 142.940349 | 145.397429 | 145.185452 | 7.592729e+07 | 142.902069 | 142.813890 | 1.447641 | 1.844159 | 146.502207 | 139.125573 | 144.704671 | 4.972505 | 101.399198 | -0.052858 | 117.189304 | -0.026508 | 129.038540 | -0.014959 | 21.153587 |
| 1070 | 145.337970 | 147.615882 | 144.980528 | 147.444584 | 147.229635 | 7.653090e+07 | 143.644287 | 142.961273 | 1.284466 | 2.010227 | 146.981728 | 138.940819 | 146.531280 | 4.986604 | 100.909377 | -0.053435 | 116.228458 | -0.027389 | 127.612408 | -0.015592 | 21.165321 |
| 1071 | 147.375283 | 149.163050 | 146.995423 | 148.921380 | 148.704294 | 6.811986e+07 | 144.553694 | 143.236380 | 0.961952 | 2.270386 | 147.777152 | 138.695607 | 148.124680 | 4.996737 | 100.418203 | -0.054006 | 115.257214 | -0.028261 | 126.158555 | -0.016211 | 21.161363 |
| 1072 | 148.656821 | 150.010875 | 148.071943 | 149.870634 | 149.652170 | 6.425222e+07 | 145.660163 | 143.530869 | 0.589081 | 2.556352 | 148.643574 | 138.418164 | 149.288649 | 5.003230 | 99.925720 | -0.054570 | 114.275894 | -0.029124 | 124.678027 | -0.016812 | 21.148490 |
| 1073 | 149.806550 | 150.715254 | 149.026204 | 149.977942 | 149.759331 | 6.069918e+07 | 146.862121 | 143.785380 | 0.135134 | 2.805932 | 149.397244 | 138.173516 | 149.748178 | 5.003989 | 99.431976 | -0.055128 | 113.284828 | -0.029977 | 123.171903 | -0.017396 | 21.131204 |
| 1074 | 149.937482 | 150.666013 | 149.022091 | 149.911667 | 149.693162 | 5.465321e+07 | 147.905162 | 144.001463 | -0.245163 | 3.045742 | 150.092948 | 137.909978 | 149.857170 | 5.003545 | 98.937018 | -0.055679 | 112.284350 | -0.030820 | 121.641290 | -0.017961 | 25.016406 |
| 1075 | 150.228161 | 151.254072 | 149.586503 | 150.104281 | 149.885502 | 5.602702e+07 | 148.803988 | 144.237215 | -0.571069 | 3.270011 | 150.777237 | 137.697192 | 150.021910 | 5.004835 | 98.440892 | -0.056223 | 111.274800 | -0.031650 | 120.087330 | -0.018506 | 27.455491 |
| 1076 | 150.328251 | 150.997797 | 149.591175 | 149.912656 | 149.694163 | 5.484778e+07 | 149.449021 | 144.548659 | -0.850904 | 3.458615 | 151.465890 | 137.631428 | 149.949074 | 5.003520 | 97.943645 | -0.056759 | 110.256524 | -0.032469 | 118.511190 | -0.019029 | 28.912854 |
| 1077 | 150.525566 | 152.430694 | 150.099878 | 151.531571 | 151.310718 | 7.580033e+07 | 150.032876 | 144.967153 | -0.975625 | 3.719924 | 152.407001 | 137.527305 | 151.004072 | 5.014296 | 97.445324 | -0.057289 | 109.229873 | -0.033274 | 116.914063 | -0.019528 | 29.716707 |
| 1078 | 149.301052 | 151.688142 | 148.723104 | 151.137179 | 150.916905 | 1.012990e+08 | 150.349418 | 145.413317 | -0.891585 | 3.905336 | 153.223988 | 137.602646 | 151.092810 | 5.011652 | 96.945977 | -0.057811 | 108.195203 | -0.034066 | 115.297171 | -0.020004 | 30.096629 |
| 1079 | 149.321425 | 151.018197 | 148.455004 | 150.396057 | 150.176865 | 9.262134e+07 | 150.424479 | 145.823313 | -0.852689 | 3.878291 | 153.579894 | 138.066731 | 150.628308 | 5.006660 | 96.445650 | -0.058325 | 107.152874 | -0.034844 | 113.661756 | -0.020453 | 27.283213 |
high_vol.head(10)
| | open | high | low | close | Adj Close | volume | MA7 | MA21 | MACD | 20SD | upper_band | lower_band | EMA | logmomentum | absolute of 3 comp | angle of 3 comp | absolute of 6 comp | angle of 6 comp | absolute of 9 comp | angle of 9 comp | search |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 36.220001 | 36.325001 | 35.775002 | 35.875000 | 34.054882 | 57111200.0 | 36.173571 | 36.751904 | 0.303356 | 0.960520 | 38.672945 | 34.830864 | 35.924548 | 3.551770 | 38.458011 | 0.046984 | 29.704545 | 0.102857 | 43.304973 | -0.053955 | 15.0 |
| 1 | 35.922501 | 36.197498 | 35.680000 | 36.022499 | 34.194897 | 86278400.0 | 36.095357 | 36.634762 | 0.328795 | 0.852735 | 38.340231 | 34.929292 | 35.989849 | 3.555991 | 38.240991 | 0.049445 | 29.954520 | 0.099254 | 43.438321 | -0.053936 | 15.0 |
| 2 | 35.755001 | 35.875000 | 35.602501 | 35.682499 | 33.872143 | 96515200.0 | 35.984999 | 36.495238 | 0.346702 | 0.677629 | 37.850495 | 35.139980 | 35.784949 | 3.546235 | 38.027974 | 0.051918 | 30.209839 | 0.095602 | 43.557403 | -0.053820 | 15.0 |
| 3 | 35.724998 | 36.187500 | 35.724998 | 36.044998 | 34.216255 | 76806800.0 | 36.001071 | 36.362023 | 0.387422 | 0.387634 | 37.137291 | 35.586756 | 35.958315 | 3.556633 | 37.818962 | 0.054401 | 30.470232 | 0.091907 | 43.662260 | -0.053608 | 15.0 |
| 4 | 36.027500 | 36.487499 | 35.842499 | 36.264999 | 34.425095 | 84362400.0 | 35.973571 | 36.243809 | 0.388315 | 0.308042 | 36.859893 | 35.627725 | 36.162771 | 3.562891 | 37.613953 | 0.056893 | 30.735430 | 0.088177 | 43.752965 | -0.053302 | 14.0 |
| 5 | 36.182499 | 36.462502 | 36.095001 | 36.382500 | 34.536625 | 79127200.0 | 36.039642 | 36.202738 | 0.372153 | 0.308860 | 36.820458 | 35.585018 | 36.309257 | 3.566217 | 37.412947 | 0.059392 | 31.005161 | 0.084416 | 43.829622 | -0.052901 | 14.0 |
| 6 | 36.467499 | 36.544998 | 36.205002 | 36.435001 | 34.586472 | 99538000.0 | 36.101071 | 36.206547 | 0.317572 | 0.295861 | 36.798268 | 35.614826 | 36.393086 | 3.567700 | 37.215939 | 0.061899 | 31.279154 | 0.080632 | 43.892360 | -0.052406 | 14.0 |
| 7 | 36.375000 | 37.122501 | 36.360001 | 36.942501 | 35.068211 | 100797600.0 | 36.253571 | 36.220595 | 0.322643 | 0.340687 | 36.901969 | 35.539221 | 36.759363 | 3.581920 | 37.022928 | 0.064410 | 31.557136 | 0.076830 | 43.941338 | -0.051818 | 14.0 |
| 8 | 36.992500 | 37.332500 | 36.832500 | 37.259998 | 35.369610 | 80528400.0 | 36.430357 | 36.266785 | 0.257925 | 0.410484 | 37.087753 | 35.445818 | 37.093120 | 3.590715 | 36.833908 | 0.066926 | 31.838833 | 0.073014 | 43.976744 | -0.051137 | 14.0 |
| 9 | 37.205002 | 37.724998 | 37.142502 | 37.389999 | 35.493000 | 95174000.0 | 36.674285 | 36.329523 | 0.184267 | 0.445597 | 37.220717 | 35.438330 | 37.291039 | 3.594294 | 36.648875 | 0.069445 | 32.123972 | 0.069192 | 43.998789 | -0.050365 | 16.0 |
def get_arima(dataframe, original_data, train_len, test_len):
    # Prepare train and test data
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_train, X_test = split_train_test(X_value)
    y_train, y_test = split_train_test(y_value)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train['close'].values.tolist()
    y_test_list = y_test['close'].values.tolist()
    # Initialize model and search for the best (p, d, q) order by AIC
    model = auto_arima(y_train_list, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    # Determine model parameters
    model.fit(y_train_list, disp=0)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate walk-forward predictions: refit on the growing history
    # before each one-step forecast, then fold in the observed value
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list, disp=0)
        # print('working on', i+1, 'of', len(y_test_list), '-- ' + str(int(100 * (i + 1) / len(y_test_list))) + '% complete')
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])
    # Generate error data
    mse = mean_squared_error(yc_test, prediction)
    rmse = mse ** 0.5
    # mape = mean_absolute_percentage_error(pd.Series(yc_test).values.tolist(), pd.Series(prediction).values.tolist())
    mae = mean_absolute_error(pd.Series(yc_test).values.tolist(),
                              pd.Series(prediction).values.tolist())
    return yc, prediction, mse, rmse, mae
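`get_arima` refits an ARIMA of fixed order at every test step, appending the just-observed value to the history before the next one-step forecast. The walk-forward loop can be illustrated in isolation with a hand-rolled least-squares AR(1) standing in for `pmdarima.ARIMA` (the helper names here are hypothetical, not from the notebook):

```python
import numpy as np

def fit_ar1(history):
    """Least-squares AR(1): y_t ≈ a * y_{t-1} + b."""
    y = np.asarray(history, dtype=float)
    a, b = np.polyfit(y[:-1], y[1:], deg=1)
    return a, b

def walk_forward(train, test):
    """Refit on the growing history before each one-step forecast,
    then append the true observation, as get_arima does."""
    history = list(train)
    preds = []
    for obs in test:
        a, b = fit_ar1(history)
        preds.append(a * history[-1] + b)
        history.append(obs)  # fold in the realised value before the next step
    return preds

train = [1.0, 2.0, 3.0, 4.0, 5.0]
test = [6.0, 7.0]
preds = walk_forward(train, test)
# A perfectly linear history fits a=1, b=1, so the one-step forecasts are exact
assert np.allclose(preds, test)
```

Refitting on every step keeps the model current but is the dominant cost of `get_arima`; pmdarima's `update` method is a cheaper alternative when the order is fixed.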
def plot_train(simulation, SIM):
    train_predict_index = np.load("index_train_appl.npy", allow_pickle=True)  # Dates for train data
    # Each column holds one multi-step window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['final_tr']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['final_tr']['prediction'][i],
                                 columns=["predicted_price"],
                                 index=train_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one multi-step window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['final_tr']['original'])):
        y_train = pd.DataFrame(simulation[SIM]['final_tr']['original'][i],
                               columns=["real_price"],
                               index=train_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_train], axis=1, sort=False)
    predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily predicted closing price
    real_price['real_mean'] = real_price.mean(axis=1)  # Daily real closing price
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"The result of Training for {SIM}", fontsize=20)
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- Train RMSE for {SIM} -----", RMSE)
    print(f"----- Train MSE for {SIM} -----", MSE)
    print(f"----- Train MAE for {SIM} -----", MAE)
def plot_test(simulation, SIM):
    test_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data
    # Each column holds one multi-step window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['final']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['final']['prediction'][i],
                                 columns=["predicted_price"],
                                 index=test_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one multi-step window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['final']['original'])):
        y_test = pd.DataFrame(simulation[SIM]['final']['original'][i],
                              columns=["real_price"],
                              index=test_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_test], axis=1, sort=False)
    predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily predicted closing price
    real_price['real_mean'] = real_price.mean(axis=1)  # Daily real closing price
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"The result of Testing for {SIM}", fontsize=20)
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- Test RMSE for {SIM} -----", RMSE)
    print(f"----- Test MSE for {SIM} -----", MSE)
    print(f"----- Test MAE for {SIM} -----", MAE)
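All of the plotting helpers rebuild a single daily series from overlapping multi-step forecast windows: each window becomes its own column, aligned by date index, and a row-wise mean collapses the overlaps. The pattern in isolation, with a hypothetical 2-step horizon and made-up values:

```python
import numpy as np
import pandas as pd

output_dim = 2  # forecast horizon per window (assumed for illustration)
index = pd.date_range("2021-01-04", periods=4, freq="D")
# Three overlapping 2-step windows, as a multi-step model would emit
windows = [[10.0, 11.0], [11.0, 12.0], [12.0, 13.0]]

predict_result = pd.DataFrame()
for i, w in enumerate(windows):
    col = pd.DataFrame(w, columns=["predicted_price"],
                       index=index[i:i + output_dim])
    predict_result = pd.concat([predict_result, col], axis=1, sort=False)

# Row-wise mean collapses the overlapping windows into one value per day;
# days covered by fewer windows simply average over fewer columns
predict_result["predicted_mean"] = predict_result.mean(axis=1)
assert np.allclose(predict_result["predicted_mean"].values,
                   [10.0, 11.0, 12.0, 13.0])
```

Because `DataFrame.mean(axis=1)` skips NaN by default, days near the edges of the test range (covered by fewer windows) are still averaged correctly.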
def plot_train_high(simulation, SIM):
    # Plots the individual LSTM (high-volatility component) predictions over the test dates
    test_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data
    # Each column holds one multi-step window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['high_vol']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['high_vol']['prediction'][i],
                                 columns=["predicted_price"],
                                 index=test_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one multi-step window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['high_vol']['original'])):
        y_real = pd.DataFrame(simulation[SIM]['high_vol']['original'][i],
                              columns=["real_price"],
                              index=test_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_real], axis=1, sort=False)
    predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily predicted closing price
    real_price['real_mean'] = real_price.mean(axis=1)  # Daily real closing price
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"The individual LSTM result for {SIM}", fontsize=20)
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- Individual LSTM RMSE for {SIM} -----", RMSE)
    print(f"----- Individual LSTM MSE for {SIM} -----", MSE)
    print(f"----- Individual LSTM MAE for {SIM} -----", MAE)
def plot_train_low(simulation, SIM):
    # Plots the individual ARIMA (low-volatility component) predictions over the test dates
    test_predict_index = np.load("index_test_appl.npy", allow_pickle=True)  # Dates for test data
    # Each column holds one multi-step window of predicted daily closing prices
    predict_result = pd.DataFrame()
    for i in range(len(simulation[SIM]['low_vol']['prediction'])):
        y_predict = pd.DataFrame(simulation[SIM]['low_vol']['prediction'][i],
                                 columns=["predicted_price"],
                                 index=test_predict_index[i:i + output_dim])
        predict_result = pd.concat([predict_result, y_predict], axis=1, sort=False)
    # Each column holds one multi-step window of real daily closing prices
    real_price = pd.DataFrame()
    for i in range(len(simulation[SIM]['low_vol']['original'])):
        y_real = pd.DataFrame(simulation[SIM]['low_vol']['original'][i],
                              columns=["real_price"],
                              index=test_predict_index[i:i + output_dim])
        real_price = pd.concat([real_price, y_real], axis=1, sort=False)
    predict_result['predicted_mean'] = predict_result.mean(axis=1)  # Daily predicted closing price
    real_price['real_mean'] = real_price.mean(axis=1)  # Daily real closing price
    # Plot the predicted result
    plt.figure(figsize=(16, 8))
    plt.plot(real_price["real_mean"])
    plt.plot(predict_result["predicted_mean"], color='r')
    plt.xlabel("Date")
    plt.ylabel("Stock price")
    plt.legend(("Real price", "Predicted price"), loc="upper left", fontsize=16)
    plt.title(f"The individual ARIMA result for {SIM}", fontsize=20)
    plt.show()
    # Calculate error metrics
    predicted = predict_result["predicted_mean"]
    real = real_price["real_mean"]
    RMSE = np.sqrt(mean_squared_error(predicted, real))
    MSE = mean_squared_error(predicted, real)
    MAE = mean_absolute_error(predicted, real)
    print(f"----- ARIMA RMSE for {SIM} -----", RMSE)
    print(f"----- ARIMA MSE for {SIM} -----", MSE)
    print(f"----- ARIMA MAE for {SIM} -----", MAE)
import os
os.getcwd()
'/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid/Outputs/full'
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: X is (samples, n_steps_in, features), i.e. each
    # n_steps_in x features slice is one window of daily data; yc holds the
    # corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # constant offset subtracted from the test predictions below
    input_dim = X_train.shape[1]     # time steps per window
    feature_size = X_train.shape[2]  # features per step
    output_dim = y_train.shape[1]    # forecast horizon
    # Option 1: set up & fit the LSTM network
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal',
                   input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    ## Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False)
    history = model.fit(X_train, y_train, epochs=500,
                        batch_size=int(optimized_period[ma]), verbose=2,
                        callbacks=callbacks, validation_data=(X_test, y_test),
                        shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(fname2 + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()
# # option 2
# model = Sequential()
# model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
# model.add(Dense(64))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(lr = 0.001), loss='mean_squared_error', metrics=['accuracy'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 3
# define custom activation
# reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 4
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
# model.add(LSTM(units=int(lstm_len/2)))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mean_squared_error', optimizer='adam')
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
    # Generate train-set predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate train error data
    # TODO: replace with yc / X_test generated by the new multi-step method.
    # Note: y_train is still in scaled space while predictiontr has been
    # inverse-transformed, so these metrics mix scales.
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # Generate test-set predictions (shifted down by the constant det)
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate test error data
    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    return (Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr,
            Original_te, predictionte, mse_te, rmse_te, mae_te)
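`get_lstm` trains in scaled space and maps predictions back with `y_scaler.inverse_transform`, so error metrics only make sense when both arguments share a scale. A minimal sketch of the (-1, 1) min-max round trip, with a hand-rolled class standing in for scikit-learn's `MinMaxScaler` (the class name is illustrative):

```python
import numpy as np

class MinMax:
    """Hand-rolled stand-in for sklearn's MinMaxScaler(feature_range=(-1, 1))."""
    def fit(self, x):
        self.lo, self.hi = float(np.min(x)), float(np.max(x))
        return self
    def transform(self, x):
        # Map [lo, hi] linearly onto [-1, 1]
        return 2.0 * (np.asarray(x, dtype=float) - self.lo) / (self.hi - self.lo) - 1.0
    def inverse_transform(self, z):
        # Undo the map exactly, restoring the original price scale
        return (np.asarray(z, dtype=float) + 1.0) / 2.0 * (self.hi - self.lo) + self.lo

prices = np.array([100.0, 150.0, 200.0])
scaler = MinMax().fit(prices)
scaled = scaler.transform(prices)
assert np.allclose(scaled, [-1.0, 0.0, 1.0])
assert np.allclose(scaler.inverse_transform(scaled), prices)
```

This round trip is why the train metrics inside `get_lstm` deserve scrutiny: comparing a scaled `y_train` against inverse-transformed predictions inflates the reported error.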
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation1 = {}
    imgfile = 'Experiment1'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        # Low-volatility component: the moving average itself
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        # High-volatility component: the residual after subtracting the moving average
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for i in df2.columns:
            if i in low_vol.columns:
                high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
        high_vol_data = df['close']
        # Generate ARIMA and LSTM predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = \
                get_arima(low_vol, low_vol_data, train_len, test_len)
        except Exception:
            print('ARIMA error, skipping to next MA type')
            continue
        (Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr,
         high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,
         high_vol_mae) = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
        # Combine components: train
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)  # ignoring first 3 steps
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        # Combine components: test
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate directional prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation1[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                                       'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
                           'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                                        'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
                           'final_tr': {'original': df['close'].head(train_len).tolist(),
                                        'prediction': final_prediction_tr.values.tolist(),
                                        'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr},
                           'final': {'original': df['close'].tail(test_len).tolist(),
                                     'prediction': final_prediction.values.tolist(),
                                     'mse': mse, 'rmse': rmse, 'mae': mae},
                           'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation1_data.json', 'w') as fp:
            json.dump(simulation1, fp)
    for ma in simulation1.keys():
        print('\n' + ma)
        print('Prediction vs Close:\t\t' + str(round(100 * simulation1[ma]['accuracy']['prediction vs close'], 2)) + '% Accuracy')
        print('Prediction vs Prediction:\t' + str(round(100 * simulation1[ma]['accuracy']['prediction vs prediction'], 2)) + '% Accuracy')
        print('MSE:\t', simulation1[ma]['final']['mse'],
              '\nRMSE:\t', simulation1[ma]['final']['rmse'],
              '\nMAE:\t', simulation1[ma]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed / 60)
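The main loop scores two directional accuracies: whether the prediction moves in the same direction as the actual price relative to yesterday's close (accuracy_1), and relative to yesterday's prediction (accuracy_2). The bookkeeping can be sketched in isolation (the helper name is illustrative, not from the notebook):

```python
import numpy as np

def directional_accuracy(pred, actual):
    """Fraction of days where the predicted move matches the realised move,
    measured against the previous close and the previous prediction,
    as in the main loop above. Ties (no move) count as misses."""
    vs_close, vs_pred = [], []
    for i in range(1, len(pred)):
        up = actual[i] > actual[i - 1]
        down = actual[i] < actual[i - 1]
        vs_close.append(1 if ((pred[i] > actual[i - 1] and up) or
                              (pred[i] < actual[i - 1] and down)) else 0)
        vs_pred.append(1 if ((pred[i] > pred[i - 1] and up) or
                             (pred[i] < pred[i - 1] and down)) else 0)
    return np.mean(vs_close), np.mean(vs_pred)

actual = [10.0, 11.0, 10.5, 11.5]
pred = [10.2, 11.1, 10.4, 11.0]
acc1, acc2 = directional_accuracy(pred, actual)
# Every predicted move matches the realised move in this toy series
assert acc1 == 1.0 and acc2 == 1.0
```

Directional accuracy complements RMSE/MAE: a model can have small magnitude errors yet still call the direction wrong, which matters for any trading interpretation of the hybrid.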
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.48 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.23 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.11 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.74 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.77 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.21 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.697 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 15:35:45 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_48 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.49136, saving model to LSTM1.h5 (48/48 - loss: 0.2128 - val_loss: 0.4914 - lr: 0.0010)
Epoch 2/500: val_loss improved from 0.49136 to 0.03252, saving model to LSTM1.h5 (loss: 0.1132 - val_loss: 0.0325)
[epochs 3-51: val_loss did not improve from 0.03252; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 7 and to 1e-05 at epoch 12]
Epoch 52/500: loss: 0.0330 - val_loss: 0.1067 - lr: 1.0000e-05
Epoch 00052: early stopping
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 29.509403666146007
RMSE: 5.432255854260365
MAPE: 4.5288133477558885
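The error and accuracy figures printed throughout this run can be reproduced with a short routine. The sketch below assumes one plausible reading of the two directional scores ("Prediction vs Close" compares the day-over-day direction of the predicted series with that of the actual close; "Prediction vs Prediction" calls the direction of the next actual move from the current close using the next prediction); the notebook's exact definitions may differ.

```python
import numpy as np

def evaluate(pred, close):
    """Regression error plus two directional-accuracy scores for a forecast.

    `pred` and `close` are 1-D arrays of predicted and actual prices.
    The directional-score definitions here are assumptions, not the
    notebook's verbatim code.
    """
    pred = np.asarray(pred, dtype=float)
    close = np.asarray(close, dtype=float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    # "Prediction vs Close": predicted move direction vs actual move direction.
    vs_close = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100
    # "Prediction vs Prediction": tomorrow's prediction minus today's close
    # vs tomorrow's actual close minus today's close.
    vs_pred = np.mean(np.sign(pred[1:] - close[:-1]) == np.sign(np.diff(close))) * 100
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape,
            "acc_vs_close": vs_close, "acc_vs_pred": vs_pred}
```

A perfect forecast scores 0 on all error metrics and 100% on both directional scores; a forecast that always calls the wrong direction scores 0% on "Prediction vs Close".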
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
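The help text above is TA-Lib's EMA docstring. Where TA-Lib is unavailable, a close approximation is a one-liner in pandas; this is a sketch, not the notebook's actual call, and the seeding caveat in the comment means early values differ slightly from TA-Lib's.

```python
import pandas as pd

def ema(prices, timeperiod=30):
    """Exponential moving average with smoothing factor 2 / (timeperiod + 1).

    Close to TA-Lib's EMA, except TA-Lib seeds the recursion with an SMA of
    the first `timeperiod` values while ewm(adjust=False) seeds with the
    first price, so the two series differ slightly near the start.
    """
    return pd.Series(prices, dtype=float).ewm(span=timeperiod, adjust=False).mean()
```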
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.43 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.27 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.08 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.88 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.64 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.21 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.681 seconds
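The stepwise search above is pmdarima's `auto_arima`: it fits candidate (p, d, q) orders and keeps the one with the lowest AIC. The criterion can be illustrated with a toy least-squares AR(p) fit on the d-times-differenced series; this is a simplified stand-in for demonstration, not pmdarima's estimator.

```python
import numpy as np

def ar_aic(y, p, d):
    """Fit AR(p) to the d-th difference of y by least squares and return a
    Gaussian AIC, 2k + n*ln(RSS/n). Comparing this value across candidate
    orders and keeping the minimum is the idea behind the stepwise search
    (pmdarima itself uses full ARIMA maximum-likelihood fits)."""
    z = np.diff(np.asarray(y, dtype=float), n=d)
    # Lagged design matrix: row t holds [z[t-1], ..., z[t-p]].
    X = np.column_stack([z[p - i - 1: len(z) - i - 1] for i in range(p)])
    target = z[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = float(np.sum((target - X @ coef) ** 2))
    n = len(target)
    return 2 * p + n * np.log(rss / n)
```

Lower is better: an order that explains the series with fewer parameters wins, and the 2k penalty discourages adding lags that barely reduce the residual sum of squares.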
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        15:37:18   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):          14.54   Jarque-Bera (JB):        2462173.05
Prob(Q):                      0.00   Prob(JB):                      0.00
Heteroskedasticity (H):       0.00   Skew:                          3.90
Prob(H) (two-sided):          0.00   Kurtosis:                    273.82
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_49 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 1.23130, saving model to LSTM1.h5 (16/16 - loss: 0.1617 - val_loss: 1.2313 - lr: 0.0010)
Epoch 2/500: val_loss improved from 1.23130 to 0.03856, saving model to LSTM1.h5 (loss: 0.0921 - val_loss: 0.0386)
[epochs 3-51: val_loss did not improve from 0.03856; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 7 and to 1e-05 at epoch 12]
Epoch 52/500: loss: 0.0439 - val_loss: 0.0840 - lr: 1.0000e-05
Epoch 00052: early stopping
EMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 28.603272766421263
RMSE: 5.348202760406646
MAPE: 4.3952252144553965
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
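TA-Lib's WMA weights the newest price in each window most heavily, with weights rising linearly from 1 (oldest) to `timeperiod` (newest). A minimal NumPy equivalent, assumed to match TA-Lib's linear-weight convention:

```python
import numpy as np

def wma(prices, timeperiod=30):
    """Linearly weighted moving average: within each window the most recent
    price gets weight `timeperiod`, the oldest gets weight 1. The first
    timeperiod - 1 slots are NaN, as no full window exists yet."""
    prices = np.asarray(prices, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    w /= w.sum()  # normalize so the weights sum to 1
    out = np.full_like(prices, np.nan)
    for i in range(timeperiod - 1, len(prices)):
        out[i] = prices[i - timeperiod + 1: i + 1] @ w
    return out
```

For example, with `timeperiod=3` the window [1, 2, 3] averages to (1*1 + 2*2 + 3*3) / 6 = 14/6.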
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.42 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.24 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.29 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.44 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.809 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        15:38:38   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):          14.57   Jarque-Bera (JB):        2460901.70
Prob(Q):                      0.00   Prob(JB):                      0.00
Heteroskedasticity (H):       0.00   Skew:                          3.90
Prob(H) (two-sided):          0.00   Kurtosis:                    273.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_50 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.74867, saving model to LSTM1.h5 (17/17 - loss: 0.4575 - val_loss: 0.7487 - lr: 0.0010)
[epochs 2-33: val_loss improved repeatedly (0.25174, 0.15433, 0.09425, 0.06911, 0.05185, 0.05073, 0.04830, 0.04409, 0.03896, 0.02617, 0.02424, 0.02407), reaching its best value of 0.02338 at epoch 33, each time saving the model to LSTM1.h5]
[epochs 34-82: val_loss did not improve from 0.02338; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 38 and to 1e-05 at epoch 43]
Epoch 83/500: loss: 0.0204 - val_loss: 0.0367 - lr: 1.0000e-05
Epoch 00083: early stopping
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 29.509403666146007
RMSE: 5.432255854260365
MAPE: 4.5288133477558885
EMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 28.603272766421263
RMSE: 5.348202760406646
MAPE: 4.3952252144553965
WMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 80.72598349686672
RMSE: 8.9847639644493
MAPE: 7.266216353433966
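The per-indicator metrics printed above follow standard definitions. As a minimal sketch (function names and the exact directional-accuracy convention are assumptions, not the notebook's own code), MSE, RMSE, MAPE, and a "Prediction vs Close" style directional accuracy could be computed as:

```python
import numpy as np

def regression_metrics(actual, predicted):
    """MSE, RMSE, and MAPE (in percent) between two aligned price series."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Share of days where the predicted move from yesterday's close
    has the same sign as the actual move (one plausible convention)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.sign(np.diff(actual))
                   == np.sign(predicted[1:] - actual[:-1])) * 100
```

"Prediction vs Prediction" accuracy would instead compare consecutive predicted values against the actual move; the notebook's precise definitions are not shown here.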
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
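TA-Lib's DEMA can also be reproduced from its definition, DEMA = 2·EMA(n) − EMA(EMA(n)). A pandas sketch (illustrative only; warm-up behaviour differs from TA-Lib, which returns NaN for the full lookback period):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA(n) - EMA(EMA(n))."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

The second EMA removes much of the lag a single EMA introduces, which is why DEMA tracks turning points faster than EMA with the same period.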
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.42 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.08 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.94 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.89 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.21 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.051 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1795.475
Date: Sun, 12 Dec 2021 AIC 3598.951
Time: 15:40:07 BIC 3617.714
Sample: 0 HQIC 3606.157
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1983 0.003 -389.581 0.000 -1.204 -1.192
ar.L2 -0.8973 0.006 -139.732 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.649 0.000 -0.410 -0.387
sigma2 5.0573 0.023 215.292 0.000 5.011 5.103
===================================================================================
Ljung-Box (L1) (Q): 14.41 Jarque-Bera (JB): 2460553.80
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.89
Prob(H) (two-sided): 0.00 Kurtosis: 273.74
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
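The stepwise search above keeps the (p, d, q) order with the lowest AIC. The criterion itself is AIC = 2k − 2 ln L̂, which can be checked directly against the printed summary:

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# The DEMA fit above estimates 4 parameters (ar.L1..ar.L3 and sigma2)
# with log-likelihood -1795.475:
print(aic(-1795.475, 4))  # ~3598.95, matching the reported AIC of 3598.951
```

Note the stepwise search also tried an intercept term, which raised AIC by exactly 2 (one extra parameter with negligible likelihood gain), so the no-intercept model wins.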
WARNING:tensorflow:Layer lstm_51 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
[Training log truncated: val_loss improved from inf to 0.41242 (epoch 1), then to 0.02965 (epoch 7), saving model to LSTM1.h5; ReduceLROnPlateau lowered lr 0.0010 → 1.0000e-04 (epoch 12) → 1.0000e-05 (epoch 17); no further improvement; Epoch 00057: early stopping]
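The log above shows the two callbacks at work: ReduceLROnPlateau cuts the learning rate 0.0010 → 1e-04 → 1e-05, and EarlyStopping halts once val_loss stops improving (best at epoch 7, stop at epoch 57, consistent with a patience of 50). A pure-Python sketch of that logic (the patience and factor values are assumptions inferred from the log, not confirmed settings):

```python
def schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
             stop_patience=50, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping over a val_loss sequence.

    Returns (final_lr, stopped_epoch or None if training ran to the end).
    """
    best, best_epoch, since_lr_drop = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, best_epoch = loss, epoch
            since_lr_drop = 0
        else:
            since_lr_drop += 1
            if since_lr_drop >= lr_patience:
                lr = max(lr * factor, min_lr)  # cut lr, floored at min_lr
                since_lr_drop = 0
        if epoch - best_epoch >= stop_patience:
            return lr, epoch                   # early stopping fires
    return lr, None
```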
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 74.8946292448382
RMSE: 8.654168316183721
MAPE: 7.175854729849037
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
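KAMA adapts its smoothing to an efficiency ratio: net change over the window divided by the sum of absolute daily changes, so it smooths hard in choppy markets and tracks tightly in trends. A sketch of the textbook formulation (seeding and edge handling differ slightly from TA-Lib's, so this is not bit-exact):

```python
def kama(prices, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (textbook formulation)."""
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    out = [None] * len(prices)
    out[timeperiod - 1] = prices[timeperiod - 1]  # seed with the price itself
    for t in range(timeperiod, len(prices)):
        change = abs(prices[t] - prices[t - timeperiod])
        volatility = sum(abs(prices[i] - prices[i - 1])
                         for i in range(t - timeperiod + 1, t + 1))
        er = change / volatility if volatility else 0.0  # efficiency ratio
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2   # smoothing constant
        out[t] = out[t - 1] + sc * (prices[t] - out[t - 1])
    return out
```

That adaptivity is a plausible reason KAMA posts the best error figures of the indicators tested here: it filters noise without the fixed lag of SMA or WMA.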
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.27 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.22 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.72 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.20 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.000 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1674.717
Date: Sun, 12 Dec 2021 AIC 3357.435
Time: 15:41:18 BIC 3376.198
Sample: 0 HQIC 3364.641
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1955 0.003 -381.246 0.000 -1.202 -1.189
ar.L2 -0.8964 0.007 -135.835 0.000 -0.909 -0.883
ar.L3 -0.3971 0.006 -67.229 0.000 -0.409 -0.385
sigma2 3.7466 0.018 211.623 0.000 3.712 3.781
===================================================================================
Ljung-Box (L1) (Q): 14.20 Jarque-Bera (JB): 2338363.32
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 3.76
Prob(H) (two-sided): 0.00 Kurtosis: 266.93
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
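The huge Jarque-Bera figures in the residual diagnostics above can be recovered from the printed skew and kurtosis via JB = n/6 · (S² + (K − 3)²/4), which is why the extreme kurtosis (266.93) dominates the statistic:

```python
def jarque_bera(n, skew, kurtosis):
    """Jarque-Bera statistic from sample skewness and (non-excess) kurtosis."""
    return n / 6.0 * (skew ** 2 + (kurtosis - 3.0) ** 2 / 4.0)

# Printed diagnostics for the KAMA fit: n=808, skew=3.76, kurtosis=266.93
print(jarque_bera(808, 3.76, 266.93))  # roughly 2.35e6, near the reported
                                       # 2338363.32 (skew/kurtosis are rounded)
```

Prob(JB) = 0.00 therefore rejects residual normality decisively, one symptom of the heavy-tailed (leptokurtic) distributions discussed in this experiment.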
WARNING:tensorflow:Layer lstm_52 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
[Training log truncated: val_loss improved from inf to 0.05005 (epoch 1), saving model to LSTM1.h5; ReduceLROnPlateau lowered lr 0.0010 → 1.0000e-04 (epoch 6) → 1.0000e-05 (epoch 11); no further improvement; Epoch 00051: early stopping]
KAMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 23.77011386693893
RMSE: 4.87546037487117
MAPE: 3.900500517739451
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
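MIDPOINT is simply the mean of the highest and lowest price inside the trailing window, so it reacts to range extremes rather than to every tick. A short sketch (illustrative, not TA-Lib's implementation):

```python
def midpoint(prices, timeperiod=14):
    """MidPoint over period: (max + min) / 2 of each trailing window."""
    return [None] * (timeperiod - 1) + [
        (max(prices[t - timeperiod + 1:t + 1])
         + min(prices[t - timeperiod + 1:t + 1])) / 2
        for t in range(timeperiod - 1, len(prices))
    ]
```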
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.23 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.08 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.26 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.86 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.22 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.186 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1689.879
Date: Sun, 12 Dec 2021 AIC 3387.759
Time: 15:42:48 BIC 3406.522
Sample: 0 HQIC 3394.964
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1878 0.003 -345.315 0.000 -1.195 -1.181
ar.L2 -0.8876 0.007 -121.809 0.000 -0.902 -0.873
ar.L3 -0.3957 0.007 -60.127 0.000 -0.409 -0.383
sigma2 3.8904 0.020 193.404 0.000 3.851 3.930
===================================================================================
Ljung-Box (L1) (Q): 13.21 Jarque-Bera (JB): 1659080.01
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.08 Skew: 3.28
Prob(H) (two-sided): 0.00 Kurtosis: 225.31
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_53 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
[Training log truncated: val_loss improved from inf to 0.14279 (epoch 1), then to 0.01434 (epoch 2), saving model to LSTM1.h5; ReduceLROnPlateau lowered lr 0.0010 → 1.0000e-04 (epoch 7) → 1.0000e-05 (epoch 12); no further improvement; Epoch 00052: early stopping]
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 29.509403666146007
RMSE: 5.432255854260365
MAPE: 4.5288133477558885
EMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 28.603272766421263
RMSE: 5.348202760406646
MAPE: 4.3952252144553965
WMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 80.72598349686672
RMSE: 8.9847639644493
MAPE: 7.266216353433966
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 74.8946292448382
RMSE: 8.654168316183721
MAPE: 7.175854729849037
KAMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 23.77011386693893
RMSE: 4.87546037487117
MAPE: 3.900500517739451
MIDPOINT
Prediction vs Close: 48.88% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 53.557107319764015
RMSE: 7.318272153983071
MAPE: 6.3365268769325365
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.38 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.43 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.57 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.150 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1784.736
Date: Sun, 12 Dec 2021 AIC 3577.471
Time: 15:44:39 BIC 3596.235
Sample: 0 HQIC 3584.677
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.844 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.861 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.862 0.000 -0.410 -0.387
sigma2 4.9242 0.023 215.469 0.000 4.879 4.969
===================================================================================
Ljung-Box (L1) (Q): 14.55 Jarque-Bera (JB): 2468024.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
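The stepwise search above picks the (p, d, q) order that minimizes AIC. As an illustration of the criterion being minimized (a pure-NumPy sketch over AR orders, not pmdarima's internals; the simulated data and `ar_aic` helper are assumptions for the example):

```python
import numpy as np

def ar_aic(series, p):
    """Fit AR(p) by least squares and return AIC = n*log(RSS/n) + 2k."""
    n = len(series) - p
    # Lag matrix: row t holds [y_{t-1}, ..., y_{t-p}]
    X = np.column_stack([series[p - i - 1:len(series) - i - 1] for i in range(p)])
    y = series[p:]
    coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(rss[0]) if len(rss) else float(np.sum((y - X @ coef) ** 2))
    k = p + 1  # AR coefficients plus the noise variance
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
# Simulate an AR(2) process so the search has a known answer
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
best_p = min(range(1, 5), key=lambda p: ar_aic(y, p))
```

The extra 2 per parameter in the penalty is why the search above rejects intercept and MA terms that barely improve the fit (e.g. AIC 3579.471 with intercept vs 3577.471 without).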
WARNING:tensorflow:Layer lstm_54 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
[Training log condensed: Epoch 1 val_loss improved to 0.38569, Epoch 2 improved to 0.08604 (best, saved to LSTM1.h5); epochs 3-52 did not improve; ReduceLROnPlateau cut lr 1e-03 to 1e-04 (epoch 7), then to 1e-05 (epoch 12); Epoch 00052: early stopping]
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 29.509403666146007
RMSE: 5.432255854260365
MAPE: 4.5288133477558885
EMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 28.603272766421263
RMSE: 5.348202760406646
MAPE: 4.3952252144553965
WMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 80.72598349686672
RMSE: 8.9847639644493
MAPE: 7.266216353433966
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 74.8946292448382
RMSE: 8.654168316183721
MAPE: 7.175854729849037
KAMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 23.77011386693893
RMSE: 4.87546037487117
MAPE: 3.900500517739451
MIDPOINT
Prediction vs Close: 48.88% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 53.557107319764015
RMSE: 7.318272153983071
MAPE: 6.3365268769325365
T3
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 44.09609147416104
RMSE: 6.640488797834165
MAPE: 5.406095596816415
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
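TEMA reduces the lag of a plain EMA by combining three cascaded EMAs: TEMA = 3·EMA1 − 3·EMA2 + EMA3, where EMA2 is the EMA of EMA1 and EMA3 the EMA of EMA2. A minimal NumPy sketch of that formula (illustrative only; TA-Lib's implementation also discards an unstable warm-up period):

```python
import numpy as np

def ema(x, timeperiod):
    """Exponential moving average with smoothing alpha = 2 / (timeperiod + 1)."""
    alpha = 2.0 / (timeperiod + 1)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

def tema(x, timeperiod=30):
    """Triple EMA: 3*EMA1 - 3*EMA2 + EMA3 cancels most of the single-EMA lag."""
    e1 = ema(x, timeperiod)
    e2 = ema(e1, timeperiod)
    e3 = ema(e2, timeperiod)
    return 3 * e1 - 3 * e2 + e3

prices = np.linspace(100, 110, 50)  # a steadily trending series
smooth = tema(prices, timeperiod=9)
```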
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.46 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.06 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.26 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.10 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.76 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.18 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.007 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1783.123
Date: Sun, 12 Dec 2021 AIC 3574.245
Time: 15:46:04 BIC 3593.008
Sample: 0 HQIC 3581.451
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1480 0.004 -302.430 0.000 -1.155 -1.141
ar.L2 -0.8300 0.008 -99.682 0.000 -0.846 -0.814
ar.L3 -0.3687 0.007 -50.527 0.000 -0.383 -0.354
sigma2 4.9055 0.028 175.970 0.000 4.851 4.960
===================================================================================
Ljung-Box (L1) (Q): 11.61 Jarque-Bera (JB): 1261976.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.16 Skew: 2.52
Prob(H) (two-sided): 0.00 Kurtosis: 196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
WARNING:tensorflow:Layer lstm_55 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
[Training log condensed: Epoch 1 val_loss improved to 0.09097 (best, saved to LSTM1.h5); epochs 2-51 did not improve; ReduceLROnPlateau cut lr 1e-03 to 1e-04 (epoch 6), then to 1e-05 (epoch 11); Epoch 00051: early stopping]
SMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 29.509403666146007
RMSE: 5.432255854260365
MAPE: 4.5288133477558885
EMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 50.37% Accuracy
MSE: 28.603272766421263
RMSE: 5.348202760406646
MAPE: 4.3952252144553965
WMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 80.72598349686672
RMSE: 8.9847639644493
MAPE: 7.266216353433966
DEMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 74.8946292448382
RMSE: 8.654168316183721
MAPE: 7.175854729849037
KAMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 23.77011386693893
RMSE: 4.87546037487117
MAPE: 3.900500517739451
MIDPOINT
Prediction vs Close: 48.88% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 53.557107319764015
RMSE: 7.318272153983071
MAPE: 6.3365268769325365
T3
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 44.09609147416104
RMSE: 6.640488797834165
MAPE: 5.406095596816415
TEMA
Prediction vs Close: 45.52% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 9.564405293897392
RMSE: 3.0926372716336124
MAPE: 2.44888799215368
Runtime (mins): 12.028950116466664
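The "Prediction vs Close" figures above are directional accuracies: how often the predicted series moves in the same direction, day over day, as the actual close. A small sketch of that metric (an assumed definition for illustration; the notebook's own accuracy helper is not shown in this section):

```python
import numpy as np

def directional_accuracy(actual, predicted):
    """Percent of days where the predicted day-over-day change has the same sign as the actual change."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    actual_dir = np.sign(np.diff(actual))
    pred_dir = np.sign(np.diff(predicted))
    return 100.0 * np.mean(actual_dir == pred_dir)

close = [10.0, 10.5, 10.2, 10.8, 10.7]
pred = [10.1, 10.4, 10.3, 10.6, 10.9]
acc = directional_accuracy(close, pred)  # 3 of 4 moves match -> 75.0
```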
from google.colab import files
import cv2
import matplotlib.pyplot as plt
uploaded = files.upload()
imgfile = 'Experiment1.png'
img = cv2.imread(imgfile)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert for matplotlib
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
Excess kurtosis is a metric that compares the kurtosis of a distribution against the kurtosis of a normal distribution. The kurtosis of a normal distribution equals 3. Therefore, the excess kurtosis is found using the formula below:
Excess Kurtosis = Kurtosis – 3
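As a quick numeric check of that formula (pure NumPy, using the standard fourth-standardized-moment definition of kurtosis):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a normal distribution)."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma2 = x.var()  # population variance
    kurtosis = np.mean((x - mu) ** 4) / sigma2 ** 2
    return kurtosis - 3.0

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ek = excess_kurtosis(x)  # flat sample -> platykurtic, negative excess kurtosis
```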
np.save("X_train_appl.npy", X_train)
np.save("y_train_appl.npy", y_train)
np.save("X_test_appl.npy", X_test)
np.save("y_test_appl.npy", y_test)
np.save("yc_train_appl.npy", yc_train)
np.save("yc_test_appl.npy", yc_test)
np.save('index_train_appl.npy', index_train)
np.save('index_test_appl.npy', index_test)
list(simulation1.keys())
['SMA', 'EMA', 'WMA', 'DEMA', 'KAMA', 'MIDPOINT', 'T3', 'TEMA']
for i in range(len(list(simulation1.keys()))):
SIM = list(simulation1.keys())[i]
plot_train(simulation1,SIM)
plot_test(simulation1,SIM)
----- Train RMSE for SMA ----- 7.895573621826288 ----- Train_MSE_LSTM for SMA ----- 62.34008281767908 ----- Train MAE LSTM for SMA ----- 6.764936516294256
----- Test RMSE for SMA----- 5.432255854260365 ----- Test_MSE_LSTM for SMA----- 29.509403666146007 ----- Test_MAE_LSTM for SMA----- 4.5288133477558885
----- Train RMSE for EMA ----- 8.87769382903084 ----- Train_MSE_LSTM for EMA ----- 78.81344772201226 ----- Train MAE LSTM for EMA ----- 7.790667497096491
----- Test RMSE for EMA----- 5.348202760406646 ----- Test_MSE_LSTM for EMA----- 28.603272766421263 ----- Test_MAE_LSTM for EMA----- 4.3952252144553965
----- Train RMSE for WMA ----- 9.703993025462333 ----- Train_MSE_LSTM for WMA ----- 94.16748063822159 ----- Train MAE LSTM for WMA ----- 8.628062737580057
----- Test RMSE for WMA----- 8.9847639644493 ----- Test_MSE_LSTM for WMA----- 80.72598349686672 ----- Test_MAE_LSTM for WMA----- 7.266216353433966
----- Train RMSE for DEMA ----- 11.240117396029953 ----- Train_MSE_LSTM for DEMA ----- 126.3402390765352 ----- Train MAE LSTM for DEMA ----- 10.016611697382782
----- Test RMSE for DEMA----- 8.654168316183721 ----- Test_MSE_LSTM for DEMA----- 74.8946292448382 ----- Test_MAE_LSTM for DEMA----- 7.175854729849037
----- Train RMSE for KAMA ----- 9.358621140016876 ----- Train_MSE_LSTM for KAMA ----- 87.58378964237077 ----- Train MAE LSTM for KAMA ----- 8.463473867850913
----- Test RMSE for KAMA----- 4.87546037487117 ----- Test_MSE_LSTM for KAMA----- 23.77011386693893 ----- Test_MAE_LSTM for KAMA----- 3.900500517739451
----- Train RMSE for MIDPOINT ----- 8.349312336425948 ----- Train_MSE_LSTM for MIDPOINT ----- 69.71101649119451 ----- Train MAE LSTM for MIDPOINT ----- 7.472949316300968
----- Test RMSE for MIDPOINT----- 7.318272153983071 ----- Test_MSE_LSTM for MIDPOINT----- 53.557107319764015 ----- Test_MAE_LSTM for MIDPOINT----- 6.3365268769325365
----- Train RMSE for T3 ----- 10.91415603392602 ----- Train_MSE_LSTM for T3 ----- 119.11880193288376 ----- Train MAE LSTM for T3 ----- 9.801440406699038
----- Test RMSE for T3----- 6.640488797834165 ----- Test_MSE_LSTM for T3----- 44.09609147416104 ----- Test_MAE_LSTM for T3----- 5.406095596816415
----- Train RMSE for TEMA ----- 6.523327083998898 ----- Train_MSE_LSTM for TEMA ----- 42.55379624483356 ----- Train MAE LSTM for TEMA ----- 4.644936803645353
----- Test RMSE for TEMA----- 3.0926372716336124 ----- Test_MSE_LSTM for TEMA----- 9.564405293897392 ----- Test_MAE_LSTM for TEMA----- 2.44888799215368
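The train/test errors listed above can be reproduced from any prediction array with a few NumPy one-liners (a sketch, not the notebook's own helper functions):

```python
import numpy as np

def regression_errors(actual, predicted):
    """Return (MSE, RMSE, MAE) for two equal-length arrays."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = np.mean(err ** 2)
    return mse, np.sqrt(mse), np.mean(np.abs(err))

mse, rmse, mae = regression_errors([3.0, 5.0, 7.0], [2.0, 5.0, 9.0])
```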
def get_lstm(data,original_data, train_len, test_len,img_file,ma ,lstm_len=3):
# prepare train and test data
X_value = pd.DataFrame(data.iloc[:, :])
y_value = pd.DataFrame(data.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
# Get data and check shape
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape (n_samples, n_steps_in, n_features), e.g. 224 x 3 x 21 (each 3 x 21 array is 3 days of data); yc holds the corresponding closing prices
# pdb.set_trace()
X_train, X_test = split_train_test(X)
y_train, y_test = split_train_test(y)
# yc_train, yc_test = split_train_test(original_data)
index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
det = 20
input_dim = X_train.shape[1]     # timesteps per sample
feature_size = X_train.shape[2]  # features per timestep
output_dim = y_train.shape[1]    # forecast horizon
# # Option 1
# # Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64,activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')
# ## Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# option 2
model = Sequential()
model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
model.add(Dense(64))
model.add(Dense(units=output_dim))
model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
# Common code
callbacks = [
EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('LSTM2.h5', verbose=1, save_best_only=True, save_weights_only=True)]
fname1 = img_file+'.png'
tensorflow.keras.utils.plot_model(
model, to_file=fname1, show_shapes=True, show_dtype=False,
show_layer_names=True, expand_nested=False, dpi=96,
layer_range=None, show_layer_activations=False
)
history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# plot loss
fname2 = img_file+'-'+ma
plt.title(img_file+'-'+ma+' Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='validation')
pyplot.legend()
pyplot.savefig(fname2+'.png',dpi='figure')
pyplot.show()
# Option 3
# define custom activation
# reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
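Option 3's custom activation is just a tanh scaled to the range (-2, 2), which suits correlation-like targets that can approach ±1 without the layer saturating early. A minimal NumPy sketch of that function (independent of the Keras `Double_Tanh` wrapper above):

```python
import numpy as np

def double_tanh(x):
    """Scaled tanh used by Option 3: output range (-2, 2) instead of (-1, 1)."""
    return 2.0 * np.tanh(x)

# Near zero it behaves like 2x; for large |x| it flattens out at +/-2.
vals = double_tanh(np.array([-50.0, 0.0, 50.0]))
```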
# Option 4
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
# model.add(LSTM(units=int(lstm_len/2)))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mean_squared_error', optimizer='adam')
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Generate predictions
predictiontr = model.predict(X_train, verbose=0)
predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
outputtr = []
for i in range(len(predictiontr)):
outputtr.extend(predictiontr[i])
predictiontr = outputtr
# Generate error data
## replace with yc , xtest generated by new multistep method
mse_tr = mean_squared_error(y_train, predictiontr)
rmse_tr = mse_tr ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
# Original_tr = pd.Series(yc_train)
Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
predictionte = model.predict(X_test, verbose=0)
predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
outputte = []
for i in range(len(predictionte)):
outputte.extend(predictionte[i])
predictionte = outputte
# Generate error data
mse_te = mean_squared_error(y_test, predictionte)
rmse_te = mse_te ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
# Original_te = pd.Series(yc_test)
Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
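Both the train and test branches above inverse-transform the model's scaled outputs back to price units and flatten them before computing MSE/RMSE/MAE. A self-contained sketch of that round trip, using a hand-rolled min-max scaler as a stand-in for the notebook's `y_scaler` (assumed to be MinMaxScaler-like):

```python
import numpy as np

# Minimal stand-in for the notebook's y_scaler: map a column vector to [0, 1],
# then invert model outputs back to the original price units.
class MinMax1D:
    def fit(self, y):
        self.lo, self.hi = float(y.min()), float(y.max())
        return self
    def transform(self, y):
        return (y - self.lo) / (self.hi - self.lo)
    def inverse_transform(self, y):
        return y * (self.hi - self.lo) + self.lo

y = np.array([[10.0], [12.5], [20.0]])   # (n, 1) column, like y_train
scaler = MinMax1D().fit(y)
scaled = scaler.transform(y)
# Model outputs come back in scaled units; invert and flatten for the metrics.
restored = scaler.inverse_transform(scaled).flatten().tolist()
```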
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation2 = {}
imgfile = 'Experiment2'
for ma in optimized_period:
print(ma)
print(functions[ma])
print(int(optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima(low_vol, low_vol_data, train_len, test_len)
except Exception as e:
print('ARIMA error ('+str(e)+'), skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation2[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation2_data.json', 'w') as fp:
json.dump(simulation2, fp)
for ma in simulation2.keys():
print('\n' + ma)
print('Prediction vs Close:\t\t' + str(round(100*simulation2[ma]['accuracy']['prediction vs close'], 2))
+ '% Accuracy')
print('Prediction vs Prediction:\t' + str(round(100*simulation2[ma]['accuracy']['prediction vs prediction'], 2))
+ '% Accuracy')
print('MSE:\t', simulation2[ma]['final']['mse'],
'\nRMSE:\t', simulation2[ma]['final']['rmse'],
'\nMAE:\t', simulation2[ma]['final']['mae'])#,
# '\nMAPE:\t', simulation2[ma]['final']['mape'])
# else:
# break
elapsed = timeit.default_timer() - start_time
print('Runtime (mins):', elapsed/60)
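The core of the loop above is the volatility split: the moving average (`low_vol`) is a smooth series handed to ARIMA, and the residual (`high_vol = close - low_vol`) is the choppy series handed to the LSTM; summing the two component forecasts reconstructs a close-price forecast. A minimal sketch of that decomposition, using a pandas rolling mean as a stand-in for the TA-Lib SMA/EMA calls held in `functions`:

```python
import pandas as pd

# Sketch of the decomposition performed inside the main loop.
close = pd.Series([10.0, 11.0, 13.0, 12.0, 15.0, 14.0, 16.0])
period = 3

low_vol = close.rolling(period).mean().fillna(0)   # smooth component -> ARIMA
high_vol = close - low_vol                         # residual component -> LSTM

# Summing the two components reconstructs the original series exactly,
# which is why the final forecast is low_vol_prediction + high_vol_prediction.
reconstructed = low_vol + high_vol
```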
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.49 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.20 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.78 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.78 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.23 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.721 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 14:39:19 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.07969, saving model to LSTM2.h5
48/48 - 5s - loss: 0.1695 - accuracy: 0.0000e+00 - val_loss: 0.0797 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 5s/epoch - 107ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.07969 to 0.01348, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0739 - accuracy: 0.0000e+00 - val_loss: 0.0135 - val_accuracy: 0.0037 - lr: 0.0010 - 277ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.01348
48/48 - 0s - loss: 0.0279 - accuracy: 0.0000e+00 - val_loss: 0.0797 - val_accuracy: 0.0037 - lr: 0.0010 - 275ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01348
48/48 - 0s - loss: 0.0309 - accuracy: 0.0000e+00 - val_loss: 0.0415 - val_accuracy: 0.0037 - lr: 0.0010 - 264ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.01348
48/48 - 0s - loss: 0.0315 - accuracy: 0.0000e+00 - val_loss: 0.0942 - val_accuracy: 0.0037 - lr: 0.0010 - 302ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.01348
48/48 - 0s - loss: 0.0188 - accuracy: 0.0000e+00 - val_loss: 0.0229 - val_accuracy: 0.0037 - lr: 0.0010 - 247ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00007: val_loss did not improve from 0.01348
48/48 - 0s - loss: 0.0113 - accuracy: 0.0000e+00 - val_loss: 0.0192 - val_accuracy: 0.0037 - lr: 0.0010 - 270ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.01348 to 0.00813, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0084 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 307ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.00813 to 0.00673, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 299ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.00673 to 0.00586, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 263ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.00586 to 0.00530, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 307ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.00530 to 0.00495, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 287ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.00495 to 0.00474, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 268ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.00474 to 0.00460, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 290ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.00460 to 0.00452, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 302ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.00452 to 0.00446, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 297ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.00446 to 0.00442, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 263ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss improved from 0.00442 to 0.00440, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 304ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.00440 to 0.00438, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 295ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.00438 to 0.00438, saving model to LSTM2.h5
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 310ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00021: val_loss did not improve from 0.00438
48/48 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 294ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.00438
48/48 - 0s - loss: 9.5672e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.00438
48/48 - 0s - loss: 9.5419e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.00438
48/48 - 0s - loss: 9.5241e-04 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
[Epochs 25-69 elided: val_loss never improved from 0.00438 while training loss crept from 9.51e-04 down to 8.83e-04 at lr 1e-05]
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.00438
48/48 - 0s - loss: 8.8141e-04 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 281ms/epoch - 6ms/step
Epoch 00070: early stopping
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 75.20110458138421
RMSE: 8.67185704341257
MAE: 7.0799160587584336
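The accuracy figures above come from the sign-agreement check in the loop (`result_1`): a prediction scores a hit when it and the actual close land on the same side of the previous close. A vectorized NumPy equivalent (the helper name `direction_accuracy` is ours; note ties with the previous close are counted as "down" here, whereas the loop scores them 0):

```python
import numpy as np

# Vectorized version of the result_1 directional check above.
def direction_accuracy(pred, actual):
    pred, actual = np.asarray(pred), np.asarray(actual)
    pred_up = pred[1:] > actual[:-1]    # prediction vs previous close
    act_up = actual[1:] > actual[:-1]   # actual move direction
    return float(np.mean(pred_up == act_up))

acc = direction_accuracy([10.0, 12.0, 11.0, 13.0], [10.0, 11.0, 12.0, 12.5])
```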
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.40 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.30 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.84 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.62 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.20 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.590 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1693.248
Date: Sun, 12 Dec 2021 AIC 3394.496
Time: 14:41:12 BIC 3413.260
Sample: 0 HQIC 3401.702
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.569 0.000 -1.204 -1.192
ar.L2 -0.8976 0.006 -139.811 0.000 -0.910 -0.885
ar.L3 -0.3984 0.006 -68.662 0.000 -0.410 -0.387
sigma2 3.9230 0.018 215.372 0.000 3.887 3.959
===================================================================================
Ljung-Box (L1) (Q): 14.54 Jarque-Bera (JB): 2462173.05
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.82
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.11100, saving model to LSTM2.h5
16/16 - 3s - loss: 0.1653 - accuracy: 0.0000e+00 - val_loss: 0.1110 - val_accuracy: 0.0037 - lr: 0.0010 - 3s/epoch - 214ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.11100 to 0.06946, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0510 - accuracy: 0.0000e+00 - val_loss: 0.0695 - val_accuracy: 0.0037 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.06946 to 0.00950, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0063 - accuracy: 0.0000e+00 - val_loss: 0.0095 - val_accuracy: 0.0037 - lr: 0.0010 - 128ms/epoch - 8ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.00950 to 0.00842, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.00842 to 0.00799, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0067 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.00799
16/16 - 0s - loss: 0.0025 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 0.0010 - 106ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00799
16/16 - 0s - loss: 0.0031 - accuracy: 0.0000e+00 - val_loss: 0.0148 - val_accuracy: 0.0037 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00799
16/16 - 0s - loss: 0.0063 - accuracy: 0.0000e+00 - val_loss: 0.0208 - val_accuracy: 0.0037 - lr: 0.0010 - 103ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.00799
16/16 - 0s - loss: 0.0062 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00010: val_loss did not improve from 0.00799
16/16 - 0s - loss: 0.0109 - accuracy: 0.0000e+00 - val_loss: 0.0122 - val_accuracy: 0.0037 - lr: 0.0010 - 107ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.00799 to 0.00723, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0160 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 125ms/epoch - 8ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.00723 to 0.00560, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0062 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 124ms/epoch - 8ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.00560 to 0.00528, saving model to LSTM2.h5
16/16 - 0s - loss: 0.0038 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 116ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 121ms/epoch - 8ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0027 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 114ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 97ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0020 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 100ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00018: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 96ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 23/500
Epoch 00023: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00023: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
[Epochs 25-36 elided: val_loss did not improve from 0.00528; training loss flat at ~0.0014-0.0015, lr 1e-05]
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.00528
16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 00063: early stopping
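The run above shows the characteristic ReduceLROnPlateau staircase: the learning rate is cut by a factor of 10 each time val_loss stalls, floors at 1e-05, and EarlyStopping eventually fires. A minimal pure-Python sketch of that scheduling logic — the `factor`, `patience`, and `min_lr` values are inferred from the log, not the notebook's actual callback arguments:

```python
# Minimal re-implementation of the ReduceLROnPlateau behaviour seen in the
# log above: lr is multiplied by `factor` after `patience` epochs without a
# new best val_loss, and never drops below `min_lr`.  Patience/factor values
# are assumptions inferred from the log, not the notebook's callback config.
def simulate_reduce_on_plateau(val_losses, lr=1e-3, factor=0.1,
                               patience=4, min_lr=1e-5):
    best, wait, history = float("inf"), 0, []
    for vl in val_losses:
        if vl < best:
            best, wait = vl, 0           # new best: reset the plateau counter
        else:
            wait += 1
            if wait > patience:          # plateau long enough -> cut lr
                lr, wait = max(lr * factor, min_lr), 0
        history.append(lr)
    return history

# A loss curve that improves for three epochs, then plateaus:
lrs = simulate_reduce_on_plateau([0.20, 0.11, 0.05] + [0.06] * 12)
# lrs steps down 1e-3 -> 1e-4 -> ~1e-5 as the plateau persists
```

EarlyStopping in the real runs fires much later (epoch 69 vs. a best at epoch 19 in the WMA run), which suggests a far larger stopping patience than the scheduler's.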
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 75.20110458138421
RMSE: 8.67185704341257
MAPE: 7.0799160587584336
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 61.82384712230415
RMSE: 7.862814198638052
MAPE: 6.504666247736678
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
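The TA-Lib signature above takes a price array and a window length. For clarity, here is a hand-rolled NumPy sketch of the same linearly weighted average (the newest price in each window gets the largest weight); the notebook itself calls `talib.WMA`, so this helper is illustrative only:

```python
import numpy as np

# Linearly weighted moving average as TA-Lib's WMA defines it: within each
# window of `timeperiod` prices the newest value gets weight `timeperiod`,
# the oldest gets weight 1.  Positions before the first full window are NaN.
def wma(price, timeperiod=30):
    price = np.asarray(price, dtype=float)
    weights = np.arange(1, timeperiod + 1)           # 1, 2, ..., timeperiod
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out

wma([1, 2, 3, 4, 5], timeperiod=3)   # last value: (3*1 + 4*2 + 5*3) / 6
```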
49
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.40 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.23 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.20 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.45 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.18 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.686 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        14:42:30   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):          2460901.70
Prob(Q):                              0.00   Prob(JB):                        0.00
Heteroskedasticity (H):               0.00   Skew:                            3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                      273.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
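The stepwise search selects the order with the lowest AIC, and the headline numbers in the SARIMAX table are easy to sanity-check by hand from the reported log likelihood. Here k = 4 estimated parameters (three AR coefficients plus sigma2); the BIC's use of the post-differencing sample size is inferred from the numbers, not stated in the output:

```python
import math

# AIC = 2k - 2*lnL with the table's log likelihood and k = 4 parameters
# (ar.L1, ar.L2, ar.L3, sigma2) reproduces the reported AIC exactly.
log_likelihood = -1709.629
k = 4
aic = 2 * k - 2 * log_likelihood                 # 3427.258, as in the table

# BIC penalises by the log of the effective sample size; with d = 3
# differences the 808 observations leave 805 usable points, which is
# consistent with the reported BIC of 3446.021.
bic = k * math.log(808 - 3) - 2 * log_likelihood
```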
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.20292, saving model to LSTM2.h5
17/17 - 4s - loss: 0.1103 - accuracy: 0.0000e+00 - val_loss: 0.2029 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 210ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.20292 to 0.11069, saving model to LSTM2.h5
17/17 - 0s - loss: 0.1639 - accuracy: 0.0000e+00 - val_loss: 0.1107 - val_accuracy: 0.0037 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.11069 to 0.05673, saving model to LSTM2.h5
17/17 - 0s - loss: 0.1197 - accuracy: 0.0000e+00 - val_loss: 0.0567 - val_accuracy: 0.0037 - lr: 0.0010 - 129ms/epoch - 8ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05673
17/17 - 0s - loss: 0.0248 - accuracy: 0.0000e+00 - val_loss: 0.0785 - val_accuracy: 0.0037 - lr: 0.0010 - 110ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.05673 to 0.00858, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0488 - accuracy: 0.0000e+00 - val_loss: 0.0086 - val_accuracy: 0.0037 - lr: 0.0010 - 128ms/epoch - 8ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.00858
17/17 - 0s - loss: 0.0060 - accuracy: 0.0000e+00 - val_loss: 0.0092 - val_accuracy: 0.0037 - lr: 0.0010 - 105ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00858
17/17 - 0s - loss: 0.0107 - accuracy: 0.0000e+00 - val_loss: 0.0110 - val_accuracy: 0.0037 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00858
17/17 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0094 - val_accuracy: 0.0037 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.00858 to 0.00558, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0031 - accuracy: 0.0000e+00 - val_loss: 0.0056 - val_accuracy: 0.0037 - lr: 0.0010 - 143ms/epoch - 8ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00558
17/17 - 0s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 108ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.00558
17/17 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 0.0010 - 116ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00558
17/17 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 0.0010 - 107ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.00558 to 0.00475, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 0.0010 - 135ms/epoch - 8ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.00475
17/17 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 0.0010 - 110ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00475
17/17 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00475
17/17 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0058 - val_accuracy: 0.0037 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.00475
17/17 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 0.0010 - 111ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: val_loss improved from 0.00475 to 0.00452, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 0.0010 - 130ms/epoch - 8ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.00452 to 0.00443, saving model to LSTM2.h5
17/17 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 0.0010 - 142ms/epoch - 8ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.00443
17/17 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 94ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.00443
17/17 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 0.0010 - 112ms/epoch - 7ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.00443
17/17 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0092 - val_accuracy: 0.0037 - lr: 0.0010 - 110ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00023: val_loss did not improve from 0.00443
17/17 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0087 - val_accuracy: 0.0037 - lr: 0.0010 - 105ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.00443
17/17 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 106ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.00443
17/17 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 111ms/epoch - 7ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.00443
17/17 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 118ms/epoch - 7ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.00443
17/17 - 0s - loss: 9.7258e-04 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 107ms/epoch - 6ms/step
Epoch 28/500
Epoch 00028: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00028: val_loss did not improve from 0.00443
17/17 - 0s - loss: 9.1754e-04 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 115ms/epoch - 7ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.00443
17/17 - 0s - loss: 8.8722e-04 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 121ms/epoch - 7ms/step
[... epochs 30-69 trimmed: val_loss never improved from 0.00443; training loss crept from 8.85e-04 down to 8.32e-04 while val_loss eased from 0.0076 to 0.0072 at lr 1e-05 ...]
Epoch 00069: early stopping
WMA
Prediction vs Close: 56.34% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 78.06346997131263
RMSE: 8.835353415190172
MAPE: 6.948265794170055
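Each per-MA report combines two "Accuracy" figures with MSE, RMSE, and MAPE. The error metrics can be reproduced as below; reading the accuracy lines as sign-of-move agreement (directional accuracy) is an assumption, since the notebook's own helper is not shown in this chunk:

```python
import numpy as np

# Standard regression error metrics, matching the MSE/RMSE/MAPE lines above.
def regression_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
    return mse, rmse, mape

# Assumed interpretation of the "Accuracy" lines: fraction of days where the
# predicted and actual day-over-day moves share a sign.
def directional_accuracy(y_true, y_pred):
    return np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))
```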
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
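TA-Lib's DEMA reduces the lag of a plain EMA via DEMA = 2*EMA(price) - EMA(EMA(price)). A simplified NumPy sketch follows; it seeds each EMA with the first value, which differs slightly from TA-Lib's lookback handling, and the notebook itself calls `talib.DEMA`:

```python
import numpy as np

# Exponential moving average with smoothing alpha = 2 / (timeperiod + 1),
# seeded with the first price (a simplification of TA-Lib's warm-up).
def ema(price, timeperiod):
    alpha = 2.0 / (timeperiod + 1)
    out = np.empty(len(price))
    out[0] = price[0]
    for i in range(1, len(price)):
        out[i] = alpha * price[i] + (1 - alpha) * out[i - 1]
    return out

# Double EMA: subtracting the doubly smoothed series cancels most of the lag.
def dema(price, timeperiod=30):
    price = np.asarray(price, dtype=float)
    e1 = ema(price, timeperiod)
    return 2 * e1 - ema(e1, timeperiod)
```

On a steadily trending series the DEMA tracks the price far more closely than the underlying EMA, which is why it is tried here as a less-lagging smoother.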
89
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.43 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.92 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.91 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.18 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.028 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        14:43:51   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):          2460553.80
Prob(Q):                              0.00   Prob(JB):                        0.00
Heteroskedasticity (H):               0.00   Skew:                            3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                      273.74
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04427, saving model to LSTM2.h5
10/10 - 4s - loss: 0.2331 - accuracy: 0.0000e+00 - val_loss: 0.0443 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 388ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.04427 to 0.02873, saving model to LSTM2.h5
10/10 - 0s - loss: 0.1760 - accuracy: 0.0000e+00 - val_loss: 0.0287 - val_accuracy: 0.0037 - lr: 0.0010 - 93ms/epoch - 9ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.02873
10/10 - 0s - loss: 0.0484 - accuracy: 0.0000e+00 - val_loss: 0.0752 - val_accuracy: 0.0037 - lr: 0.0010 - 83ms/epoch - 8ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.02873
10/10 - 0s - loss: 0.0095 - accuracy: 0.0000e+00 - val_loss: 0.0313 - val_accuracy: 0.0037 - lr: 0.0010 - 79ms/epoch - 8ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.02873 to 0.01131, saving model to LSTM2.h5
10/10 - 0s - loss: 0.0045 - accuracy: 0.0000e+00 - val_loss: 0.0113 - val_accuracy: 0.0037 - lr: 0.0010 - 94ms/epoch - 9ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.01131 to 0.01100, saving model to LSTM2.h5
10/10 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0110 - val_accuracy: 0.0037 - lr: 0.0010 - 95ms/epoch - 10ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0162 - val_accuracy: 0.0037 - lr: 0.0010 - 81ms/epoch - 8ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0209 - val_accuracy: 0.0037 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0166 - val_accuracy: 0.0037 - lr: 0.0010 - 75ms/epoch - 7ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0163 - val_accuracy: 0.0037 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00011: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 0.0010 - 76ms/epoch - 8ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0157 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 79ms/epoch - 8ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0161 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 96ms/epoch - 10ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0163 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 100ms/epoch - 10ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0165 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 78ms/epoch - 8ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00016: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0166 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 82ms/epoch - 8ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0166 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 84ms/epoch - 8ms/step
[... epochs 18-40 trimmed: val_loss never improved from 0.01100; training loss held at 0.0012 while val_loss drifted from 0.0166 to 0.0168; ReduceLROnPlateau floored the learning rate at 1e-05 at epoch 21 ...]
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0168 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 83ms/epoch - 8ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 85ms/epoch - 8ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0169 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 101ms/epoch - 10ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0170 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0170 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0170 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0170 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01100
10/10 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0170 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 88ms/epoch - 9ms/step
Epoch 00056: early stopping
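The training trace above is driven by three Keras callbacks: a `ModelCheckpoint` that writes `LSTM2.h5` whenever `val_loss` improves, a `ReduceLROnPlateau` that steps the learning rate 1e-3 → 1e-4 → 1e-5, and an `EarlyStopping` that ends the run. A minimal sketch of a configuration consistent with this log — the patience values are assumptions inferred from the epoch numbers, not the notebook's actual code:

```python
from tensorflow.keras.callbacks import (EarlyStopping, ModelCheckpoint,
                                        ReduceLROnPlateau)

def make_callbacks():
    """Callbacks consistent with the training log.

    Patience values are assumptions: the log only shows the LR dropping
    10x on plateaus (floored at 1e-5) and training stopping early.
    """
    checkpoint = ModelCheckpoint("LSTM2.h5", monitor="val_loss",
                                 save_best_only=True, verbose=1)
    reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                                  patience=5, min_lr=1e-5, verbose=1)
    early_stop = EarlyStopping(monitor="val_loss", patience=50, verbose=1)
    return [checkpoint, reduce_lr, early_stop]
```

These would be passed as `callbacks=make_callbacks()` to `model.fit(...)`; the `verbose=1` setting is what produces the per-epoch "saving model to LSTM2.h5" and "ReduceLROnPlateau reducing learning rate" lines.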
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 75.20110458138421
RMSE: 8.67185704341257
MAPE: 7.0799160587584336
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 61.82384712230415
RMSE: 7.862814198638052
MAPE: 6.504666247736678
WMA
Prediction vs Close: 56.34% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 78.06346997131263
RMSE: 8.835353415190172
MAPE: 6.948265794170055
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 153.59400995187858
RMSE: 12.39330504554288
MAPE: 11.203775482220726
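The per-indicator metrics above can be reproduced with a short NumPy sketch. The function name, argument names, and the reading of "Prediction vs Close" as next-step directional agreement between forecast and actual close are assumptions, since the notebook's evaluation code is not shown ("Prediction vs Prediction" is omitted because its definition cannot be recovered from the log):

```python
import numpy as np

def evaluate(pred, close):
    """Error and directional-accuracy metrics for an aligned forecast series.

    Hypothetical sketch: `pred` and `close` are 1-D arrays of equal length;
    the directional definition below is an assumption about the notebook.
    """
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100

    # "Prediction vs Close": does the predicted day-over-day move
    # match the direction of the actual move?
    pred_dir = np.sign(np.diff(pred))
    close_dir = np.sign(np.diff(close))
    acc_vs_close = np.mean(pred_dir == close_dir) * 100

    return mse, rmse, mape, acc_vs_close
```

Note that the MSE/RMSE/MAPE values and the directional accuracies measure different things, which is why DEMA can have the worst MSE yet a directional accuracy comparable to the others.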
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
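The help text above is TA-Lib's docstring for `KAMA`. The recursion it implements — an EMA whose smoothing constant adapts to Kaufman's efficiency ratio — can be sketched in plain NumPy. This is the standard Kaufman formulation, not TA-Lib's exact implementation; TA-Lib's unstable-period handling (the `18` printed above) may make its warm-up values differ:

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (standard formulation).

    NumPy sketch of what TA-Lib's KAMA computes; warm-up handling
    may differ slightly from TA-Lib's.
    """
    price = np.asarray(price, dtype=float)
    out = np.full(len(price), np.nan)

    fast_sc = 2.0 / (fast + 1)
    slow_sc = 2.0 / (slow + 1)

    out[timeperiod - 1] = price[timeperiod - 1]  # seed the recursion
    for t in range(timeperiod, len(price)):
        # Efficiency ratio: net move over the window vs. total path length.
        change = abs(price[t] - price[t - timeperiod])
        volatility = np.sum(np.abs(np.diff(price[t - timeperiod:t + 1])))
        er = change / volatility if volatility > 0 else 0.0
        # Smoothing constant interpolates between the fast and slow EMAs.
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

In a strong trend the efficiency ratio approaches 1 and KAMA tracks price like a fast EMA; in choppy ranges it approaches 0 and KAMA barely moves, which is why it sits between SMA-like smoothness and EMA-like responsiveness.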
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.38 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.27 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.15 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.73 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.22 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.992 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        14:45:03   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.10383, saving model to LSTM2.h5
45/45 - 4s - loss: 0.1770 - accuracy: 0.0000e+00 - val_loss: 0.1038 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 90ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.10383 to 0.01943, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0618 - accuracy: 0.0000e+00 - val_loss: 0.0194 - val_accuracy: 0.0037 - lr: 0.0010 - 266ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.01943
45/45 - 0s - loss: 0.0545 - accuracy: 0.0000e+00 - val_loss: 0.2219 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 286ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01943
45/45 - 0s - loss: 0.0658 - accuracy: 0.0000e+00 - val_loss: 0.0253 - val_accuracy: 0.0037 - lr: 0.0010 - 240ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.01943
45/45 - 0s - loss: 0.0312 - accuracy: 0.0000e+00 - val_loss: 0.1697 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 274ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.01943
45/45 - 0s - loss: 0.0233 - accuracy: 0.0000e+00 - val_loss: 0.0298 - val_accuracy: 0.0037 - lr: 0.0010 - 258ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.01943 to 0.01815, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0102 - accuracy: 0.0000e+00 - val_loss: 0.0181 - val_accuracy: 0.0037 - lr: 0.0010 - 287ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.01815
45/45 - 0s - loss: 0.0061 - accuracy: 0.0000e+00 - val_loss: 0.0275 - val_accuracy: 0.0037 - lr: 0.0010 - 259ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.01815 to 0.00734, saving model to LSTM2.h5
45/45 - 0s - loss: 0.0049 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 0.0010 - 291ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0027 - accuracy: 0.0000e+00 - val_loss: 0.0410 - val_accuracy: 0.0037 - lr: 0.0010 - 261ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0031 - accuracy: 0.0000e+00 - val_loss: 0.0205 - val_accuracy: 0.0037 - lr: 0.0010 - 295ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0504 - val_accuracy: 0.0037 - lr: 0.0010 - 236ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0031 - accuracy: 0.0000e+00 - val_loss: 0.0254 - val_accuracy: 0.0037 - lr: 0.0010 - 244ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00014: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0024 - accuracy: 0.0000e+00 - val_loss: 0.0518 - val_accuracy: 0.0037 - lr: 0.0010 - 318ms/epoch - 7ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0058 - accuracy: 0.0000e+00 - val_loss: 0.0374 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 270ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0044 - accuracy: 0.0000e+00 - val_loss: 0.0258 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 253ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0207 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 282ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0022 - accuracy: 0.0000e+00 - val_loss: 0.0179 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 285ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00019: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0162 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 278ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0159 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0156 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0154 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0152 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00024: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0150 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 256ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0149 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0147 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0146 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0145 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0144 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0143 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0141 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0140 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0139 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0138 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0137 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 263ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0136 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0135 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0134 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0133 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0132 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 257ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0131 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 257ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0130 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0129 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0128 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 259ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0127 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0126 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0125 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0124 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0123 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0123 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0122 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0121 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0120 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0119 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0118 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0117 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0116 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0116 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.00734
45/45 - 0s - loss: 0.0010 - accuracy: 0.0000e+00 - val_loss: 0.0115 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 00059: early stopping
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 75.20110458138421
RMSE: 8.67185704341257
MAPE: 7.0799160587584336
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 61.82384712230415
RMSE: 7.862814198638052
MAPE: 6.504666247736678
WMA
Prediction vs Close: 56.34% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 78.06346997131263
RMSE: 8.835353415190172
MAPE: 6.948265794170055
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 153.59400995187858
RMSE: 12.39330504554288
MAPE: 11.203775482220726
KAMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 121.28941541171922
RMSE: 11.013147388994629
MAPE: 9.175643045864026
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
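The help text above is TA-Lib's docstring for `MIDPOINT`, the simplest of these overlap studies: the average of the highest and lowest value of the input over the lookback window. A NumPy sketch (names are assumed; TA-Lib's own implementation yields the same values, with the first `timeperiod - 1` outputs undefined):

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """MidPoint over period: (highest + lowest) / 2 in each rolling window.

    NumPy sketch of TA-Lib's MIDPOINT; the first timeperiod-1 values
    are NaN, matching TA-Lib's lookback behaviour.
    """
    price = np.asarray(price, dtype=float)
    out = np.full(len(price), np.nan)
    for t in range(timeperiod - 1, len(price)):
        window = price[t - timeperiod + 1:t + 1]
        out[t] = (window.max() + window.min()) / 2.0
    return out
```

Unlike the moving averages above, MIDPOINT ignores everything between the window's extremes, so it steps rather than glides — a different smoothness/lag trade-off for the ARIMA stage to model.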
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.26 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.27 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.85 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.20 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.168 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1689.879
Date:                Sun, 12 Dec 2021   AIC                           3387.759
Time:                        14:46:33   BIC                           3406.522
Sample:                             0   HQIC                          3394.964
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1878      0.003   -345.315      0.000      -1.195      -1.181
ar.L2         -0.8876      0.007   -121.809      0.000      -0.902      -0.873
ar.L3         -0.3957      0.007    -60.127      0.000      -0.409      -0.383
sigma2         3.8904      0.020    193.404      0.000       3.851       3.930
===================================================================================
Ljung-Box (L1) (Q):                  13.21   Jarque-Bera (JB):           1659080.01
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.08   Skew:                             3.28
Prob(H) (two-sided):                  0.00   Kurtosis:                       225.31
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.13473, saving model to LSTM2.h5
58/58 - 4s - loss: 0.1760 - accuracy: 0.0000e+00 - val_loss: 0.1347 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 64ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.13473 to 0.00527, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0557 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 0.0010 - 334ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.00527
58/58 - 0s - loss: 0.0038 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 0.0010 - 329ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.00527 to 0.00478, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 0.0010 - 393ms/epoch - 7ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.00478 to 0.00472, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 0.0010 - 375ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0054 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 0.0010 - 333ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0097 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 0.0010 - 307ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0255 - accuracy: 0.0000e+00 - val_loss: 0.0556 - val_accuracy: 0.0037 - lr: 0.0010 - 308ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00009: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0276 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 0.0010 - 320ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0433 - accuracy: 0.0000e+00 - val_loss: 0.0879 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 328ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0147 - accuracy: 0.0000e+00 - val_loss: 0.0407 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 308ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0101 - accuracy: 0.0000e+00 - val_loss: 0.0224 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 346ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0081 - accuracy: 0.0000e+00 - val_loss: 0.0130 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 334ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00014: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0066 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 340ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0048 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 328ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0045 - accuracy: 0.0000e+00 - val_loss: 0.0071 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 357ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0043 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 317ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0042 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 338ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00019: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0041 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 306ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0040 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 312ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0039 - accuracy: 0.0000e+00 - val_loss: 0.0057 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 349ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0038 - accuracy: 0.0000e+00 - val_loss: 0.0055 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 330ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0037 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0036 - accuracy: 0.0000e+00 - val_loss: 0.0052 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 335ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0036 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 305ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.00472
58/58 - 0s - loss: 0.0034 - accuracy: 0.0000e+00 - val_loss: 0.0048 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 314ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss improved from 0.00472 to 0.00467, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0033 - accuracy: 0.0000e+00 - val_loss: 0.0047 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 348ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss improved from 0.00467 to 0.00457, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0033 - accuracy: 0.0000e+00 - val_loss: 0.0046 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 350ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss improved from 0.00457 to 0.00448, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0032 - accuracy: 0.0000e+00 - val_loss: 0.0045 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 333ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss improved from 0.00448 to 0.00441, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0031 - accuracy: 0.0000e+00 - val_loss: 0.0044 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 345ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss improved from 0.00441 to 0.00435, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0030 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 333ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss improved from 0.00435 to 0.00430, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0030 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 360ms/epoch - 6ms/step
Epoch 34/500
Epoch 00034: val_loss improved from 0.00430 to 0.00427, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0029 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 396ms/epoch - 7ms/step
Epoch 35/500
Epoch 00035: val_loss improved from 0.00427 to 0.00425, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0028 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 342ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss improved from 0.00425 to 0.00424, saving model to LSTM2.h5
58/58 - 0s - loss: 0.0027 - accuracy: 0.0000e+00 - val_loss: 0.0042 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 352ms/epoch - 6ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.00424
58/58 - 0s - loss: 0.0027 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 309ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.00424
58/58 - 0s - loss: 0.0026 - accuracy: 0.0000e+00 - val_loss: 0.0043 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
[Epochs 39-85 omitted: val_loss did not improve from 0.00424 in any of them; training loss fell steadily from 0.0025 to 0.0011 while val_loss drifted from 0.0043 up to 0.0093 and back to 0.0092, lr held at 1.0000e-05 throughout.]
Epoch 86/500
Epoch 00086: val_loss did not improve from 0.00424
58/58 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0091 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 327ms/epoch - 6ms/step
Epoch 00086: early stopping
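The callback behaviour visible in these logs, where the learning rate is cut on a val_loss plateau and training halts once val_loss has stalled for long enough, can be mimicked with a small pure-Python loop. This is an illustrative sketch, not the Keras implementation; the patience values and `min_lr` below are assumptions chosen to resemble the pattern in the output above.

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1,
                 lr_patience=5, stop_patience=50, min_lr=1e-5):
    """Mimic ReduceLROnPlateau + EarlyStopping over a val_loss sequence."""
    best = float("inf")
    since_best = 0       # epochs since val_loss last improved (early stopping)
    since_lr_drop = 0    # epochs since last improvement or LR reduction
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best = vl
            since_best = since_lr_drop = 0
        else:
            since_best += 1
            since_lr_drop += 1
            if since_lr_drop >= lr_patience:
                lr = max(lr * factor, min_lr)   # reduce LR on plateau
                since_lr_drop = 0
            if since_best >= stop_patience:
                return epoch, lr, best          # early stopping fires here
    return len(val_losses), lr, best

# Two improving epochs followed by a long plateau, as in the logs above
epochs, lr, best = run_schedule([0.036, 0.0114] + [0.02] * 60)
```

With these settings the loop drops the learning rate twice before clamping at `min_lr`, then stops 50 epochs after the last improvement, which is the same qualitative shape as the Keras output above.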
SMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 75.20110458138421
RMSE: 8.67185704341257
MAPE: 7.0799160587584336
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE: 61.82384712230415
RMSE: 7.862814198638052
MAPE: 6.504666247736678
WMA
Prediction vs Close: 56.34% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 78.06346997131263
RMSE: 8.835353415190172
MAPE: 6.948265794170055
DEMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 153.59400995187858
RMSE: 12.39330504554288
MAPE: 11.203775482220726
KAMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 121.28941541171922
RMSE: 11.013147388994629
MAPE: 9.175643045864026
MIDPOINT
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 110.09412594018622
RMSE: 10.492574800314088
MAPE: 8.796456301428389
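The per-overlay summaries above report MSE, RMSE, MAPE, and two directional hit rates. A minimal NumPy sketch of how such figures could be computed is shown below; the exact definitions the notebook uses are not printed, so the two accuracy measures here are assumptions: "vs Close" compares the predicted direction against yesterday's close, "vs Prediction" compares the day-over-day direction of the predictions themselves, both against the actual direction of the close.

```python
import numpy as np

def report(pred, close):
    """Error and directional-accuracy metrics for a prediction series."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    actual_dir = np.sign(np.diff(close))
    # predicted direction measured against yesterday's close ...
    dir_vs_close = np.sign(pred[1:] - close[:-1])
    # ... and against yesterday's prediction
    dir_vs_pred = np.sign(np.diff(pred))
    acc_close = np.mean(dir_vs_close == actual_dir) * 100
    acc_pred = np.mean(dir_vs_pred == actual_dir) * 100
    return mse, rmse, mape, acc_close, acc_pred

mse, rmse, mape, acc_close, acc_pred = report(
    [101, 101, 102, 102],   # hypothetical predictions
    [100, 102, 101, 103])   # hypothetical closes
```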
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
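TA-Lib's help text above only gives the signature; the T3 itself is Tillson's triple-smoothed moving average, built from six cascaded EMAs whose weights depend on the volume factor. A pure-NumPy illustration of that construction follows. Warm-up handling differs from TA-Lib's (this version seeds each EMA with the first value), so treat it as a sketch of the formula rather than a drop-in replacement:

```python
import numpy as np

def ema(x, period):
    """Simple recursive EMA seeded with the first observation."""
    alpha = 2.0 / (period + 1)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

def t3(price, timeperiod=5, vfactor=0.7):
    v = vfactor
    e1 = ema(np.asarray(price, float), timeperiod)
    e2 = ema(e1, timeperiod); e3 = ema(e2, timeperiod)
    e4 = ema(e3, timeperiod); e5 = ema(e4, timeperiod)
    e6 = ema(e5, timeperiod)
    c1 = -v**3
    c2 = 3 * v**2 + 3 * v**3
    c3 = -6 * v**2 - 3 * v - 3 * v**3
    c4 = 1 + 3 * v + v**3 + 3 * v**2
    # c1 + c2 + c3 + c4 == 1, so a constant series maps to itself
    return c1 * e6 + c2 * e5 + c3 * e4 + c4 * e3

series = np.full(60, 42.0)
smoothed = t3(series)
```

Because the four coefficients sum to one, a flat input passes through unchanged, which is a quick sanity check on any reimplementation.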
19
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.41 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.59 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.160 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1784.736
Date: Sun, 12 Dec 2021 AIC 3577.471
Time: 14:48:17 BIC 3596.235
Sample: 0 HQIC 3584.677
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.844 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.861 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.862 0.000 -0.410 -0.387
sigma2 4.9242 0.023 215.469 0.000 4.879 4.969
===================================================================================
Ljung-Box (L1) (Q): 14.55 Jarque-Bera (JB): 2468024.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
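The information criteria in the SARIMAX summary above can be reproduced directly from the reported log-likelihood: AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L, with k = 4 estimated parameters (three AR coefficients plus sigma2). The effective sample of 805 is an inference here, 808 observations minus the d = 3 differences; the Jarque-Bera statistic likewise follows from the reported skew and raw kurtosis. A quick arithmetic check, with values copied from the table:

```python
import math

loglik = -1784.736      # Log Likelihood from the SARIMAX table
k = 4                   # ar.L1, ar.L2, ar.L3, sigma2
n = 808 - 3             # effective observations after d = 3 differencing

aic = 2 * k - 2 * loglik            # matches AIC 3577.471
bic = k * math.log(n) - 2 * loglik  # matches BIC 3596.235

# Jarque-Bera from the reported skew and (raw) kurtosis
skew, kurt = 3.90, 274.15
jb = n / 6 * (skew**2 + (kurt - 3) ** 2 / 4)
```

The enormous JB value and kurtosis of 274 quantify the heavy-tailed residuals discussed at the top of the notebook: the series is far from mesokurtic.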
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.03602, saving model to LSTM2.h5
43/43 - 4s - loss: 0.1500 - accuracy: 0.0000e+00 - val_loss: 0.0360 - val_accuracy: 0.0037 - lr: 0.0010 - 4s/epoch - 84ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.03602 to 0.01139, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0309 - accuracy: 0.0000e+00 - val_loss: 0.0114 - val_accuracy: 0.0037 - lr: 0.0010 - 274ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.01139
43/43 - 0s - loss: 0.0364 - accuracy: 0.0000e+00 - val_loss: 0.0897 - val_accuracy: 0.0037 - lr: 0.0010 - 262ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01139
43/43 - 0s - loss: 0.0438 - accuracy: 0.0000e+00 - val_loss: 0.0251 - val_accuracy: 0.0037 - lr: 0.0010 - 252ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.01139
43/43 - 0s - loss: 0.0158 - accuracy: 0.0000e+00 - val_loss: 0.0613 - val_accuracy: 0.0037 - lr: 0.0010 - 274ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.01139
43/43 - 0s - loss: 0.0117 - accuracy: 0.0000e+00 - val_loss: 0.0312 - val_accuracy: 0.0037 - lr: 0.0010 - 281ms/epoch - 7ms/step
Epoch 7/500
Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00007: val_loss did not improve from 0.01139
43/43 - 0s - loss: 0.0083 - accuracy: 0.0000e+00 - val_loss: 0.0216 - val_accuracy: 0.0037 - lr: 0.0010 - 255ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.01139 to 0.00638, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0146 - accuracy: 0.0000e+00 - val_loss: 0.0064 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 256ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.00638 to 0.00515, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0029 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 265ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.00515 to 0.00491, saving model to LSTM2.h5
43/43 - 0s - loss: 0.0026 - accuracy: 0.0000e+00 - val_loss: 0.0049 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 319ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0023 - accuracy: 0.0000e+00 - val_loss: 0.0053 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 236ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0021 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 277ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 291ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 237ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00015: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 236ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0076 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 292ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 246ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00020: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
[Epochs 21-59 omitted: val_loss did not improve from 0.00491 in any of them; loss edged down from 0.0014 to 0.0012 and val_loss from 0.0074 to 0.0068, lr held at 1.0000e-05 throughout.]
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.00491
43/43 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 00060: early stopping
[SMA, EMA, WMA, DEMA, KAMA, and MIDPOINT summaries repeated verbatim from the block above.]
T3
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 225.73907153615718
RMSE: 15.024615520410403
MAPE: 12.611725131734374
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.44 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.29 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.16 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.77 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.078 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1783.123
Date: Sun, 12 Dec 2021 AIC 3574.245
Time: 14:49:45 BIC 3593.008
Sample: 0 HQIC 3581.451
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1480 0.004 -302.430 0.000 -1.155 -1.141
ar.L2 -0.8300 0.008 -99.682 0.000 -0.846 -0.814
ar.L3 -0.3687 0.007 -50.527 0.000 -0.383 -0.354
sigma2 4.9055 0.028 175.970 0.000 4.851 4.960
===================================================================================
Ljung-Box (L1) (Q): 11.61 Jarque-Bera (JB): 1261976.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.16 Skew: 2.52
Prob(H) (two-sided): 0.00 Kurtosis: 196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.32785, saving model to LSTM2.h5
90/90 - 4s - loss: 0.0549 - accuracy: 0.0000e+00 - val_loss: 0.3279 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 4s/epoch - 45ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.32785 to 0.05194, saving model to LSTM2.h5
90/90 - 1s - loss: 0.0650 - accuracy: 0.0000e+00 - val_loss: 0.0519 - val_accuracy: 0.0037 - lr: 0.0010 - 562ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05194
90/90 - 0s - loss: 0.0748 - accuracy: 0.0000e+00 - val_loss: 0.0536 - val_accuracy: 0.0037 - lr: 0.0010 - 489ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.05194 to 0.00511, saving model to LSTM2.h5
90/90 - 0s - loss: 0.0452 - accuracy: 0.0000e+00 - val_loss: 0.0051 - val_accuracy: 0.0037 - lr: 0.0010 - 482ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0236 - accuracy: 0.0000e+00 - val_loss: 0.0153 - val_accuracy: 0.0037 - lr: 0.0010 - 561ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0182 - accuracy: 0.0000e+00 - val_loss: 0.0166 - val_accuracy: 0.0037 - lr: 0.0010 - 492ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0196 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 0.0010 - 499ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0200 - accuracy: 0.0000e+00 - val_loss: 0.0336 - val_accuracy: 0.0037 - lr: 0.0010 - 539ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00009: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0246 - accuracy: 0.0000e+00 - val_loss: 0.0255 - val_accuracy: 0.0037 - lr: 0.0010 - 567ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0381 - accuracy: 0.0000e+00 - val_loss: 0.0348 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 482ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0083 - accuracy: 0.0000e+00 - val_loss: 0.0203 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 476ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0068 - accuracy: 0.0000e+00 - val_loss: 0.0155 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 501ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0057 - accuracy: 0.0000e+00 - val_loss: 0.0135 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 475ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00014: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0048 - accuracy: 0.0000e+00 - val_loss: 0.0126 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 489ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0037 - accuracy: 0.0000e+00 - val_loss: 0.0118 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 522ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0035 - accuracy: 0.0000e+00 - val_loss: 0.0112 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 491ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0034 - accuracy: 0.0000e+00 - val_loss: 0.0108 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 565ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0033 - accuracy: 0.0000e+00 - val_loss: 0.0104 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 499ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00019: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0032 - accuracy: 0.0000e+00 - val_loss: 0.0102 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 455ms/epoch - 5ms/step
[Epochs 20-40 omitted: val_loss did not improve from 0.00511 in any of them; loss fell from 0.0032 to 0.0019 and val_loss from 0.0100 to 0.0079, lr held at 1.0000e-05 throughout.]
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0019 - accuracy: 0.0000e+00 - val_loss: 0.0078 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 467ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0077 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 491ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0018 - accuracy: 0.0000e+00 - val_loss: 0.0075 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 468ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0074 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 521ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 540ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0072 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 550ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0070 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 473ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0016 - accuracy: 0.0000e+00 - val_loss: 0.0069 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 490ms/epoch - 5ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0068 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 579ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0066 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 472ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0015 - accuracy: 0.0000e+00 - val_loss: 0.0065 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 523ms/epoch - 6ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 488ms/epoch - 5ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.00511
90/90 - 0s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0062 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 472ms/epoch - 5ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.00511
90/90 - 1s - loss: 0.0014 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 560ms/epoch - 6ms/step
Epoch 00054: early stopping
SMA
Prediction vs Close:      53.73% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE:  75.20110458138421
RMSE: 8.67185704341257
MAE:  7.0799160587584336

EMA
Prediction vs Close:      53.73% Accuracy
Prediction vs Prediction: 44.78% Accuracy
MSE:  61.82384712230415
RMSE: 7.862814198638052
MAE:  6.504666247736678

WMA
Prediction vs Close:      56.34% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE:  78.06346997131263
RMSE: 8.835353415190172
MAE:  6.948265794170055

DEMA
Prediction vs Close:      52.24% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE:  153.59400995187858
RMSE: 12.39330504554288
MAE:  11.203775482220726

KAMA
Prediction vs Close:      52.61% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE:  121.28941541171922
RMSE: 11.013147388994629
MAE:  9.175643045864026

MIDPOINT
Prediction vs Close:      52.24% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE:  110.09412594018622
RMSE: 10.492574800314088
MAE:  8.796456301428389

T3
Prediction vs Close:      52.99% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE:  225.73907153615718
RMSE: 15.024615520410403
MAE:  12.611725131734374

TEMA
Prediction vs Close:      51.12% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE:  157.04980120863047
RMSE: 12.531951213144364
MAE:  11.294114614846999

Runtime: mins: 12.02286981541666
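The "Prediction vs Close" and "Prediction vs Prediction" percentages above are directional-accuracy scores: the first counts how often the prediction calls the up/down move relative to the previous close, the second how often consecutive predictions move in the same direction as consecutive closes. A minimal sketch of both scores, mirroring the accuracy loop in the simulation driver (the `actual` and `prediction` values here are hypothetical):

```python
import numpy as np

def directional_accuracy(prediction, actual):
    result_1, result_2 = [], []
    for i in range(1, len(prediction)):
        # 1 when the prediction and the actual close move the same way vs yesterday's close
        result_1.append(int((prediction[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0))
        # 1 when consecutive predictions move the same way as consecutive closes
        result_2.append(int((prediction[i] - prediction[i - 1]) * (actual[i] - actual[i - 1]) > 0))
    return np.mean(result_1), np.mean(result_2)

# Toy example with hypothetical price and prediction series
actual = [100.0, 101.0, 99.0, 102.0]
prediction = [100.5, 101.5, 100.0, 99.5]
acc1, acc2 = directional_accuracy(prediction, actual)
```

As in the loop above, ties count as misses because only strictly matching signs score a 1.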
from google.colab import files
import cv2
import matplotlib.pyplot as plt
uploaded = files.upload()
imgfile = 'Experiment2'
# cv2 loads images as BGR; convert to RGB so matplotlib renders the colours correctly
img = cv2.cvtColor(cv2.imread(imgfile + '.png'), cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fa5d05a45d0>
for i in range(len(list(simulation2.keys()))):
SIM = list(simulation2.keys())[i]
plot_train(simulation2,SIM)
plot_test(simulation2,SIM)
----- Train RMSE for SMA ----- 8.871298903186338 ----- Train_MSE_LSTM for SMA ----- 78.69994422967511 ----- Train MAE LSTM for SMA ----- 7.764003126570808
----- Test RMSE for SMA----- 8.67185704341257 ----- Test_MSE_LSTM for SMA----- 75.20110458138421 ----- Test_MAE_LSTM for SMA----- 7.0799160587584336
----- Train RMSE for EMA ----- 10.179800509177445 ----- Train_MSE_LSTM for EMA ----- 103.62833840664938 ----- Train MAE LSTM for EMA ----- 8.952723233047472
----- Test RMSE for EMA----- 7.862814198638052 ----- Test_MSE_LSTM for EMA----- 61.82384712230415 ----- Test_MAE_LSTM for EMA----- 6.504666247736678
----- Train RMSE for WMA ----- 10.496291718330035 ----- Train_MSE_LSTM for WMA ----- 110.17213983628366 ----- Train MAE LSTM for WMA ----- 9.343427695739683
----- Test RMSE for WMA----- 8.835353415190172 ----- Test_MSE_LSTM for WMA----- 78.06346997131263 ----- Test_MAE_LSTM for WMA----- 6.948265794170055
----- Train RMSE for DEMA ----- 12.115569265976841 ----- Train_MSE_LSTM for DEMA ----- 146.7870186386826 ----- Train MAE LSTM for DEMA ----- 10.916550872639965
----- Test RMSE for DEMA----- 12.39330504554288 ----- Test_MSE_LSTM for DEMA----- 153.59400995187858 ----- Test_MAE_LSTM for DEMA----- 11.203775482220726
----- Train RMSE for KAMA ----- 10.558696520648454 ----- Train_MSE_LSTM for KAMA ----- 111.48607221515375 ----- Train MAE LSTM for KAMA ----- 9.496371790900133
----- Test RMSE for KAMA----- 11.013147388994629 ----- Test_MSE_LSTM for KAMA----- 121.28941541171922 ----- Test_MAE_LSTM for KAMA----- 9.175643045864026
----- Train RMSE for MIDPOINT ----- 9.458884562735138 ----- Train_MSE_LSTM for MIDPOINT ----- 89.47049717114909 ----- Train MAE LSTM for MIDPOINT ----- 8.375965095480147
----- Test RMSE for MIDPOINT----- 10.492574800314088 ----- Test_MSE_LSTM for MIDPOINT----- 110.09412594018622 ----- Test_MAE_LSTM for MIDPOINT----- 8.796456301428389
----- Train RMSE for T3 ----- 12.046185882632866 ----- Train_MSE_LSTM for T3 ----- 145.11059431894336 ----- Train MAE LSTM for T3 ----- 10.824884583380554
----- Test RMSE for T3----- 15.024615520410403 ----- Test_MSE_LSTM for T3----- 225.73907153615718 ----- Test_MAE_LSTM for T3----- 12.611725131734374
----- Train RMSE for TEMA ----- 7.429673273539691 ----- Train_MSE_LSTM for TEMA ----- 55.20004495154998 ----- Train MAE LSTM for TEMA ----- 5.060762752805627
----- Test RMSE for TEMA----- 12.531951213144364 ----- Test_MSE_LSTM for TEMA----- 157.04980120863047 ----- Test_MAE_LSTM for TEMA----- 11.294114614846999
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
# prepare train and test data
X_value = pd.DataFrame(data.iloc[:, :])
y_value = pd.DataFrame(data.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaler.fit(X_value)
y_scaler.fit(y_value)
# Scale the data and check shapes
X_scale_dataset = X_scaler.transform(X_value)
y_scale_dataset = y_scaler.transform(y_value)
# X has shape (samples, n_steps_in, n_features): each n_steps_in x n_features slice is
# n_steps_in days' worth of data; yc holds the corresponding closing-price values
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
X_train, X_test = split_train_test(X)
y_train, y_test = split_train_test(y)
# yc_train, yc_test, = split_train_test(original_data)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
det = 20  # manual offset (in price units) subtracted from the test predictions below
input_dim = X_train.shape[1]  # timesteps per sample
feature_size = X_train.shape[2]  # features per timestep
output_dim = y_train.shape[1]  # prediction horizon
# Option 1: single LSTM layer with a dense head and dropout
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64, activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
# (followed by the same callbacks / plot_model / fit / loss-plot code as Option 3 below)
# Option 2: bidirectional LSTM
# model = Sequential()
# model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
# model.add(Dense(64))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
# (followed by the same callbacks / plot_model / fit / loss-plot code as Option 3 below)
# Option 3
# define custom activation
# reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
class Double_Tanh(Activation):
def __init__(self, activation, **kwargs):
super(Double_Tanh, self).__init__(activation, **kwargs)
self.__name__ = 'double_tanh'
def double_tanh(x):
return (K.tanh(x) * 2)
get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# Model Generation
model = Sequential()
#check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
model.add(Dense(1))
model.add(Activation(double_tanh))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# Common code
callbacks = [
EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('LSTM3.h5', verbose=1, save_best_only=True, save_weights_only=True)]
fname1 = img_file+'.png'
tensorflow.keras.utils.plot_model(
model, to_file=fname1, show_shapes=True, show_dtype=False,
show_layer_names=True, expand_nested=False, dpi=96,
layer_range=None, show_layer_activations=False
)
history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# plot loss
fname2 = img_file+'-'+ma
plt.title(img_file+'-'+ma+' Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='validation')
pyplot.legend()
pyplot.savefig(fname2+'.png',dpi='figure')
pyplot.show()
# Option 4: stacked LSTM with a sigmoid output
# model = Sequential()
# model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
# model.add(LSTM(units=int(lstm_len / 2)))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mean_squared_error', optimizer='adam')
# (followed by the same callbacks / plot_model / fit / loss-plot code as Option 3 above)
# Generate predictions
predictiontr = model.predict(X_train, verbose=0)
predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
outputtr = []
for i in range(len(predictiontr)):
outputtr.extend(predictiontr[i])
predictiontr = outputtr
# Generate error data
## replace with yc, X_test generated by the new multistep method
# Inverse-transform y_train so both series are in the same (price) units as the predictions
mse_tr = mean_squared_error(y_scaler.inverse_transform(y_train).flatten(), predictiontr)
rmse_tr = mse_tr ** 0.5
mae_tr = mean_absolute_error(y_scaler.inverse_transform(y_train).flatten(), pd.Series(predictiontr))
# Original_tr = pd.Series(yc_train)
Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
predictionte = model.predict(X_test, verbose=0)
predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
outputte = []
for i in range(len(predictionte)):
outputte.extend(predictionte[i])
predictionte = outputte
# Generate error data (same unit correction as the train split)
mse_te = mean_squared_error(y_scaler.inverse_transform(y_test).flatten(), predictionte)
rmse_te = mse_te ** 0.5
mae_te = mean_absolute_error(y_scaler.inverse_transform(y_test).flatten(), pd.Series(predictionte))
# Original_te = pd.Series(yc_test)
Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
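Option 3 above attaches a custom "double tanh" activation, which simply rescales tanh so the output saturates at ±2 instead of ±1, giving the output layer a wider range for the scaled residuals. A backend-free sketch of the same function (plain `math.tanh` standing in for `K.tanh`):

```python
import math

def double_tanh(x):
    # Same S-shape as tanh, but bounded by (-2, 2) instead of (-1, 1)
    return math.tanh(x) * 2.0

# The function is odd, passes through the origin, and saturates near +/-2
outputs = [double_tanh(x) for x in (-10.0, 0.0, 10.0)]
```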
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation3 = {}
imgfile = 'Experiment3'
for ma in optimized_period:
print(ma)
print(functions[ma])
print ( int( optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod = int( optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
except:
print('ARIMA error, skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation3[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation3_data.json', 'w') as fp:
json.dump(simulation3, fp)
for ma in simulation3.keys():
print('\n' + ma)
print('Prediction vs Close:\t\t' + str(round(100*simulation3[ma]['accuracy']['prediction vs close'], 2))
+ '% Accuracy')
print('Prediction vs Prediction:\t' + str(round(100*simulation3[ma]['accuracy']['prediction vs prediction'], 2))
+ '% Accuracy')
print('MSE:\t', simulation3[ma]['final']['mse'],
'\nRMSE:\t', simulation3[ma]['final']['rmse'],
'\nMAE:\t', simulation3[ma]['final']['mae'])#,
# '\nMAPE:\t', simulation3[ma]['final']['mape'])
# else:
# break
elapsed = timeit.default_timer() - start_time
print('Runtime: mins:',elapsed/60)
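The loop above splits each series into a low-volatility component (the moving average, handed to ARIMA) and a high-volatility residual (handed to the LSTM), then recombines the two forecasts. A minimal sketch of that decomposition, using a pandas rolling mean as a stand-in for the TA-Lib call in `functions[ma]` (the toy series and period are illustrative):

```python
import pandas as pd

# Toy series; the notebook's df['close'] and tuned optimized_period[ma] are the real inputs
df = pd.DataFrame({'close': [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]})
period = 3  # illustrative moving-average period

# Low-volatility component: the smoothed trend
# (rolling mean standing in for the TA-Lib SMA; warm-up NaNs filled with 0 as in the loop)
low_vol = df['close'].rolling(period).mean().fillna(0)

# High-volatility component: the residual the smoothing removed
high_vol = df['close'].subtract(low_vol, fill_value=0)

# By construction the two components recombine exactly into the original series
recombined = low_vol + high_vol
```

This exact additivity is what lets the final hybrid prediction be formed by simply summing the ARIMA and LSTM outputs.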
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.51 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.19 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.08 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.77 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.82 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.22 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.752 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 14:57:27 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
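The Jarque-Bera statistic and kurtosis of 271.99 in the diagnostics above flag a strongly leptokurtic residual distribution, the volatility-balance issue raised at the top of this section. A sketch of the same two diagnostics on a hypothetical heavy-tailed series (the Student-t residuals here are an assumption for illustration, not the notebook's data):

```python
import numpy as np
from scipy.stats import kurtosis, jarque_bera

rng = np.random.default_rng(0)
# Hypothetical heavy-tailed residuals: Student-t with 3 degrees of freedom
residuals = rng.standard_t(df=3, size=5000)

# Pearson kurtosis (a normal distribution scores 3; larger means fatter tails)
k = kurtosis(residuals, fisher=False)

# Jarque-Bera tests normality via skewness and kurtosis; a tiny p-value rejects it
res = jarque_bera(residuals)
```

A kurtosis far above 3 together with a near-zero Jarque-Bera p-value, as in the SARIMAX summary, indicates the ARIMA residuals are nowhere near Gaussian.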
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.32523, saving model to LSTM3.h5
48/48 - 2s - loss: 0.2060 - mse: 0.2060 - mae: 0.3352 - val_loss: 0.3252 - val_mse: 0.3252 - val_mae: 0.5327 - lr: 0.0010 - 2s/epoch - 44ms/step
Epochs 2-19/500: val_loss improved stepwise to 0.02569 (best checkpoints at epochs 3, 7, 9, 11, 13, 15, 17 and 19, each saved to LSTM3.h5) at lr 0.0010
Epochs 20-51/500: val_loss did not improve from 0.02569; ReduceLROnPlateau cut the learning rate to 1.0000e-04 at epoch 24 and 1.0000e-05 at epoch 29, with training loss settling near 0.0045 and val_loss near 0.12
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.1209 - val_mse: 0.1209 - val_mae: 0.3302 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0504 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3295 - lr: 1.0000e-05 - 192ms/epoch - 4ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0526 - val_loss: 0.1201 - val_mse: 0.1201 - val_mae: 0.3290 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0522 - val_loss: 0.1200 - val_mse: 0.1200 - val_mae: 0.3289 - lr: 1.0000e-05 - 207ms/epoch - 4ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0515 - val_loss: 0.1203 - val_mse: 0.1203 - val_mae: 0.3293 - lr: 1.0000e-05 - 240ms/epoch - 5ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0519 - val_loss: 0.1200 - val_mse: 0.1200 - val_mae: 0.3290 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0509 - val_loss: 0.1202 - val_mse: 0.1202 - val_mae: 0.3293 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0502 - val_loss: 0.1209 - val_mse: 0.1209 - val_mae: 0.3303 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0527 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3298 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0501 - val_loss: 0.1202 - val_mse: 0.1202 - val_mae: 0.3293 - lr: 1.0000e-05 - 206ms/epoch - 4ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0500 - val_loss: 0.1200 - val_mse: 0.1200 - val_mae: 0.3290 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0480 - val_loss: 0.1194 - val_mse: 0.1194 - val_mae: 0.3281 - lr: 1.0000e-05 - 257ms/epoch - 5ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0533 - val_loss: 0.1189 - val_mse: 0.1189 - val_mae: 0.3274 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.1182 - val_mse: 0.1182 - val_mae: 0.3264 - lr: 1.0000e-05 - 254ms/epoch - 5ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0496 - val_loss: 0.1185 - val_mse: 0.1185 - val_mae: 0.3269 - lr: 1.0000e-05 - 205ms/epoch - 4ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0498 - val_loss: 0.1183 - val_mse: 0.1183 - val_mae: 0.3265 - lr: 1.0000e-05 - 209ms/epoch - 4ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.1178 - val_mse: 0.1178 - val_mae: 0.3257 - lr: 1.0000e-05 - 209ms/epoch - 4ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.02569
48/48 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0498 - val_loss: 0.1180 - val_mse: 0.1180 - val_mae: 0.3261 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 00069: early stopping
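The checkpoint, plateau, and early-stopping behaviour visible in the log can be reproduced with standard Keras callbacks. The sketch below is an assumption about how the notebook was configured (the actual callback arguments, patience values, and model are not shown in the output); it uses a toy model and random data purely to exercise the callbacks.

```python
import numpy as np
import tensorflow as tf

# Toy windowed data standing in for the real feature windows.
X = np.random.rand(256, 10, 1).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mse", "mae"])

callbacks = [
    # Save only the best weights, as in "saving model to LSTM3.h5" above.
    tf.keras.callbacks.ModelCheckpoint("LSTM.keras", monitor="val_loss",
                                       save_best_only=True, verbose=1),
    # Cut the learning rate 10x on a plateau (1e-3 -> 1e-4 -> 1e-5 in the log).
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                                         patience=5, min_lr=1e-5, verbose=1),
    # Stop once val_loss has stalled, as at "Epoch 00069: early stopping".
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=30, verbose=1),
]

history = model.fit(X, y, validation_split=0.2, epochs=3,
                    batch_size=16, verbose=0, callbacks=callbacks)
```

With `save_best_only=True`, a run like the one above only overwrites the checkpoint when `val_loss` beats its running minimum, which is why most epochs in the log report "did not improve".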
SMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.725767438505336
RMSE: 5.7206439706125165
MAPE: 4.798603095387009
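The metrics reported for SMA can be computed directly from the predicted and actual closes; the "Prediction vs Close" figure is a directional accuracy, comparing the sign of each predicted move against the realized move. The notebook's own evaluation code is not shown, so the sketch below is an assumed reconstruction with made-up arrays:

```python
import numpy as np

def evaluate(actual, predicted):
    """Directional accuracy (%), MSE, RMSE and MAPE for two aligned series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    # Did the prediction move the same direction as the close, day over day?
    same_dir = np.sign(np.diff(predicted)) == np.sign(np.diff(actual))
    return same_dir.mean() * 100, mse, rmse, mape

acc, mse, rmse, mape = evaluate([100, 102, 101, 105], [100, 101, 103, 104])
# Two of the three moves match in direction, so acc is 66.67%.
```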
EMA
EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
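The help text above is TA-Lib's EMA. If the C library is unavailable, pandas' `ewm` computes the same recursion; note that TA-Lib seeds its EMA with an SMA of the first `timeperiod` values, so the earliest outputs differ slightly from this sketch (the series and period here are illustrative assumptions):

```python
import pandas as pd

def ema(price, timeperiod=30):
    # span=n gives alpha = 2 / (n + 1), the same smoothing factor TA-Lib uses;
    # adjust=False applies the plain recursive EMA formula.
    return price.ewm(span=timeperiod, adjust=False).mean()

close = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5])
smooth = ema(close, timeperiod=3)
```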
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.41 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.27 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.87 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.62 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.21 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.621 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1693.248
Date:                Sun, 12 Dec 2021   AIC                           3394.496
Time:                        14:58:57   BIC                           3413.260
Sample:                             0   HQIC                          3401.702
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1982      0.003   -389.569      0.000      -1.204      -1.192
ar.L2         -0.8976      0.006   -139.811      0.000      -0.910      -0.885
ar.L3         -0.3984      0.006    -68.662      0.000      -0.410      -0.387
sigma2         3.9230      0.018    215.372      0.000       3.887       3.959
===================================================================================
Ljung-Box (L1) (Q):               14.54   Jarque-Bera (JB):          2462173.05
Prob(Q):                           0.00   Prob(JB):                        0.00
Heteroskedasticity (H):            0.00   Skew:                            3.90
Prob(H) (two-sided):               0.00   Kurtosis:                      273.82
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.15984, saving model to LSTM3.h5
16/16 - 2s - loss: 0.1221 - mse: 0.1221 - mae: 0.2728 - val_loss: 0.1598 - val_mse: 0.1598 - val_mae: 0.3529 - lr: 0.0010 - 2s/epoch - 156ms/step
Epochs 2-11 (summary): val_loss improved steadily from 0.15984 to 0.03549, saving LSTM3.h5 at each new best, at lr=0.0010 and ~80-110 ms/epoch over 16 batches.
Epoch 00016: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00021: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epochs 12-58 (summary): val_loss did not improve from 0.03549; training loss settled around 0.0056-0.0086 (mae 0.059-0.072) while val_loss eased from 0.0424 back toward 0.0355.
Epochs 59-102 (summary): small gains resumed at lr=1e-05, val_loss inching from 0.03549 down to 0.03407 with LSTM3.h5 re-saved at each new best.
Epochs 103-116 (summary): val_loss did not improve from 0.03407.
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0609 - val_loss: 0.0342 - val_mse: 0.0342 - val_mae: 0.1537 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 117/500
Epoch 00117: val_loss did not improve from 0.03407
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0568 - val_loss: 0.0341 - val_mse: 0.0341 - val_mae: 0.1535 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 118/500
Epoch 00118: val_loss improved from 0.03407 to 0.03398, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0612 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1532 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 119/500
Epoch 00119: val_loss improved from 0.03398 to 0.03383, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0609 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1527 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 120/500
Epoch 00120: val_loss improved from 0.03383 to 0.03373, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0586 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1524 - lr: 1.0000e-05 - 123ms/epoch - 8ms/step
Epoch 121/500
Epoch 00121: val_loss improved from 0.03373 to 0.03350, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0592 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1518 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 122/500
Epoch 00122: val_loss improved from 0.03350 to 0.03347, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0593 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1517 - lr: 1.0000e-05 - 126ms/epoch - 8ms/step
Epoch 123/500
Epoch 00123: val_loss improved from 0.03347 to 0.03329, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0621 - val_loss: 0.0333 - val_mse: 0.0333 - val_mae: 0.1512 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 124/500
Epoch 00124: val_loss improved from 0.03329 to 0.03309, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.0331 - val_mse: 0.0331 - val_mae: 0.1506 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 125/500
Epoch 00125: val_loss improved from 0.03309 to 0.03303, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0594 - val_loss: 0.0330 - val_mse: 0.0330 - val_mae: 0.1504 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 126/500
Epoch 00126: val_loss did not improve from 0.03303
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0569 - val_loss: 0.0330 - val_mse: 0.0330 - val_mae: 0.1505 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 127/500
Epoch 00127: val_loss improved from 0.03303 to 0.03293, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0607 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1501 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 128/500
Epoch 00128: val_loss improved from 0.03293 to 0.03284, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0602 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1499 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 129/500
Epoch 00129: val_loss improved from 0.03284 to 0.03272, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0577 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1495 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 130/500
Epoch 00130: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0577 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1496 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 131/500
Epoch 00131: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0610 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1501 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 132/500
Epoch 00132: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0592 - val_loss: 0.0331 - val_mse: 0.0331 - val_mae: 0.1506 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 133/500
Epoch 00133: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0599 - val_loss: 0.0332 - val_mse: 0.0332 - val_mae: 0.1510 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 134/500
Epoch 00134: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0609 - val_loss: 0.0330 - val_mse: 0.0330 - val_mae: 0.1505 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 135/500
Epoch 00135: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0570 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1502 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 136/500
Epoch 00136: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0595 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1502 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 137/500
Epoch 00137: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0581 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1502 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 138/500
Epoch 00138: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0600 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1497 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 139/500
Epoch 00139: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0613 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1499 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 140/500
Epoch 00140: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0604 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1501 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 141/500
Epoch 00141: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0583 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1502 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 142/500
Epoch 00142: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0564 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1502 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 143/500
Epoch 00143: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0590 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1501 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 144/500
Epoch 00144: val_loss did not improve from 0.03272
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0568 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1500 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 145/500
Epoch 00145: val_loss improved from 0.03272 to 0.03264, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0591 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1496 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 146/500
Epoch 00146: val_loss improved from 0.03264 to 0.03260, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0568 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1495 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 147/500
Epoch 00147: val_loss improved from 0.03260 to 0.03258, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0565 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1495 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 148/500
Epoch 00148: val_loss improved from 0.03258 to 0.03241, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0555 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1490 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 149/500
Epoch 00149: val_loss improved from 0.03241 to 0.03226, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0561 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1485 - lr: 1.0000e-05 - 145ms/epoch - 9ms/step
Epoch 150/500
Epoch 00150: val_loss improved from 0.03226 to 0.03221, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0590 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1484 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 151/500
Epoch 00151: val_loss improved from 0.03221 to 0.03215, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0566 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1482 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 152/500
Epoch 00152: val_loss improved from 0.03215 to 0.03194, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0571 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1476 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 153/500
Epoch 00153: val_loss improved from 0.03194 to 0.03176, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0578 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1471 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 154/500
Epoch 00154: val_loss improved from 0.03176 to 0.03169, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0565 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1469 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 155/500
Epoch 00155: val_loss improved from 0.03169 to 0.03159, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0590 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1466 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 156/500
Epoch 00156: val_loss improved from 0.03159 to 0.03159, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0563 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1466 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 157/500
Epoch 00157: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0591 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1468 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 158/500
Epoch 00158: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0561 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1472 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 159/500
Epoch 00159: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0594 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1474 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 160/500
Epoch 00160: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0603 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1476 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 161/500
Epoch 00161: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0567 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1477 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 162/500
Epoch 00162: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0575 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1471 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 163/500
Epoch 00163: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1472 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 164/500
Epoch 00164: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0568 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1474 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 165/500
Epoch 00165: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0578 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1476 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 166/500
Epoch 00166: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0574 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1479 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 167/500
Epoch 00167: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0601 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1484 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 168/500
Epoch 00168: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0568 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1484 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 169/500
Epoch 00169: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0566 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1483 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 170/500
Epoch 00170: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0567 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1487 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 171/500
Epoch 00171: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0574 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1493 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 172/500
Epoch 00172: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1490 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 173/500
Epoch 00173: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1493 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 174/500
Epoch 00174: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0565 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1495 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 175/500
Epoch 00175: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0565 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1497 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 176/500
Epoch 00176: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0561 - val_loss: 0.0325 - val_mse: 0.0325 - val_mae: 0.1498 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 177/500
Epoch 00177: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0567 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1497 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 178/500
Epoch 00178: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0568 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1498 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 179/500
Epoch 00179: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0573 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1495 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 180/500
Epoch 00180: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0579 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1494 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 181/500
Epoch 00181: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0549 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1492 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 182/500
Epoch 00182: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0556 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1486 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 183/500
Epoch 00183: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0563 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1483 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 184/500
Epoch 00184: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0560 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1479 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 185/500
Epoch 00185: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0594 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1484 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 186/500
Epoch 00186: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0556 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1485 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 187/500
Epoch 00187: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0553 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1490 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 188/500
Epoch 00188: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0572 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1490 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 189/500
Epoch 00189: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0544 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1492 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 190/500
Epoch 00190: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0567 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1491 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 191/500
Epoch 00191: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0575 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1491 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 192/500
Epoch 00192: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0536 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1490 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 193/500
Epoch 00193: val_loss did not improve from 0.03159
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0569 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1478 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 194/500
Epoch 00194: val_loss improved from 0.03159 to 0.03145, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0589 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1471 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 195/500
Epoch 00195: val_loss improved from 0.03145 to 0.03107, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0560 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1460 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 196/500
Epoch 00196: val_loss improved from 0.03107 to 0.03085, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0565 - val_loss: 0.0309 - val_mse: 0.0309 - val_mae: 0.1453 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 197/500
Epoch 00197: val_loss did not improve from 0.03085
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0549 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1458 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 198/500
Epoch 00198: val_loss did not improve from 0.03085
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1463 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 199/500
Epoch 00199: val_loss did not improve from 0.03085
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0561 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1462 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 200/500
Epoch 00200: val_loss did not improve from 0.03085
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0558 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1460 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 201/500
Epoch 00201: val_loss did not improve from 0.03085
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0523 - val_loss: 0.0309 - val_mse: 0.0309 - val_mae: 0.1456 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 202/500
Epoch 00202: val_loss improved from 0.03085 to 0.03069, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0543 - val_loss: 0.0307 - val_mse: 0.0307 - val_mae: 0.1449 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 203/500
Epoch 00203: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0553 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1451 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 204/500
Epoch 00204: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0551 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1453 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 205/500
Epoch 00205: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0562 - val_loss: 0.0307 - val_mse: 0.0307 - val_mae: 0.1450 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 206/500
Epoch 00206: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0555 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1453 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 207/500
Epoch 00207: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0564 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1463 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 208/500
Epoch 00208: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0554 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1466 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 209/500
Epoch 00209: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0569 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1465 - lr: 1.0000e-05 - 75ms/epoch - 5ms/step
Epoch 210/500
Epoch 00210: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0553 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1464 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 211/500
Epoch 00211: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0576 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1464 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 212/500
Epoch 00212: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0579 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1464 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 213/500
Epoch 00213: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0550 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1464 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 214/500
Epoch 00214: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0555 - val_loss: 0.0309 - val_mse: 0.0309 - val_mae: 0.1461 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 215/500
Epoch 00215: val_loss did not improve from 0.03069
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0544 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1455 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 216/500
Epoch 00216: val_loss improved from 0.03069 to 0.03062, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0552 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1451 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 217/500
Epoch 00217: val_loss improved from 0.03062 to 0.03053, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0540 - val_loss: 0.0305 - val_mse: 0.0305 - val_mae: 0.1448 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 218/500
Epoch 00218: val_loss improved from 0.03053 to 0.03041, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0546 - val_loss: 0.0304 - val_mse: 0.0304 - val_mae: 0.1445 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 219/500
Epoch 00219: val_loss improved from 0.03041 to 0.03038, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0544 - val_loss: 0.0304 - val_mse: 0.0304 - val_mae: 0.1444 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 220/500
Epoch 00220: val_loss improved from 0.03038 to 0.03010, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0537 - val_loss: 0.0301 - val_mse: 0.0301 - val_mae: 0.1436 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 221/500
Epoch 00221: val_loss improved from 0.03010 to 0.02986, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0552 - val_loss: 0.0299 - val_mse: 0.0299 - val_mae: 0.1428 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 222/500
Epoch 00222: val_loss improved from 0.02986 to 0.02981, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0553 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1427 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 223/500
Epoch 00223: val_loss improved from 0.02981 to 0.02975, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0555 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1425 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 224/500
Epoch 00224: val_loss did not improve from 0.02975
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0543 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1427 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 225/500
Epoch 00225: val_loss did not improve from 0.02975
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0534 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1426 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 226/500
Epoch 00226: val_loss improved from 0.02975 to 0.02947, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0557 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1417 - lr: 1.0000e-05 - 138ms/epoch - 9ms/step
Epoch 227/500
Epoch 00227: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0554 - val_loss: 0.0297 - val_mse: 0.0297 - val_mae: 0.1423 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 228/500
Epoch 00228: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0550 - val_loss: 0.0297 - val_mse: 0.0297 - val_mae: 0.1424 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 229/500
Epoch 00229: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0547 - val_loss: 0.0300 - val_mse: 0.0300 - val_mae: 0.1434 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 230/500
Epoch 00230: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0566 - val_loss: 0.0299 - val_mse: 0.0299 - val_mae: 0.1433 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 231/500
Epoch 00231: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0565 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1427 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 232/500
Epoch 00232: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0560 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1424 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 233/500
Epoch 00233: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0581 - val_loss: 0.0299 - val_mse: 0.0299 - val_mae: 0.1434 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 234/500
Epoch 00234: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0530 - val_loss: 0.0301 - val_mse: 0.0301 - val_mae: 0.1439 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 235/500
Epoch 00235: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0578 - val_loss: 0.0299 - val_mse: 0.0299 - val_mae: 0.1433 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 236/500
Epoch 00236: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0546 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1421 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 237/500
Epoch 00237: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0544 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1423 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 238/500
Epoch 00238: val_loss did not improve from 0.02947
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0550 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1420 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 239/500
Epoch 00239: val_loss improved from 0.02947 to 0.02932, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0550 - val_loss: 0.0293 - val_mse: 0.0293 - val_mae: 0.1415 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 240/500
Epoch 00240: val_loss improved from 0.02932 to 0.02924, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0544 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1413 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 241/500
Epoch 00241: val_loss improved from 0.02924 to 0.02923, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0546 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1413 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 242/500
Epoch 00242: val_loss improved from 0.02923 to 0.02921, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0549 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1412 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 243/500
Epoch 00243: val_loss improved from 0.02921 to 0.02917, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0554 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1411 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step
Epoch 244/500
Epoch 00244: val_loss improved from 0.02917 to 0.02916, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0548 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1411 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step
Epoch 245/500
Epoch 00245: val_loss improved from 0.02916 to 0.02912, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0548 - val_loss: 0.0291 - val_mse: 0.0291 - val_mae: 0.1410 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 246/500
Epoch 00246: val_loss did not improve from 0.02912
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0551 - val_loss: 0.0291 - val_mse: 0.0291 - val_mae: 0.1411 - lr: 1.0000e-05 - 124ms/epoch - 8ms/step
Epoch 247/500
Epoch 00247: val_loss did not improve from 0.02912
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0546 - val_loss: 0.0291 - val_mse: 0.0291 - val_mae: 0.1411 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 248/500
Epoch 00248: val_loss improved from 0.02912 to 0.02882, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0552 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1402 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 249/500
Epoch 00249: val_loss did not improve from 0.02882
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0568 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1403 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 250/500
Epoch 00250: val_loss did not improve from 0.02882
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0544 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1405 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 251/500
Epoch 00251: val_loss did not improve from 0.02882
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0552 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1404 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 252/500
Epoch 00252: val_loss improved from 0.02882 to 0.02875, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0554 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1400 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 253/500
Epoch 00253: val_loss did not improve from 0.02875
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0539 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1404 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 254/500
Epoch 00254: val_loss did not improve from 0.02875
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0564 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1402 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 255/500
Epoch 00255: val_loss improved from 0.02875 to 0.02866, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0544 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1398 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 256/500
Epoch 00256: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0537 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1401 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 257/500
Epoch 00257: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0518 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1407 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 258/500
Epoch 00258: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0553 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1401 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 259/500
Epoch 00259: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1404 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 260/500
Epoch 00260: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.0291 - val_mse: 0.0291 - val_mae: 0.1412 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 261/500
Epoch 00261: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0535 - val_loss: 0.0291 - val_mse: 0.0291 - val_mae: 0.1414 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 262/500
Epoch 00262: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0538 - val_loss: 0.0291 - val_mse: 0.0291 - val_mae: 0.1412 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 263/500
Epoch 00263: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0508 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1403 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step
Epoch 264/500
Epoch 00264: val_loss did not improve from 0.02866
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0566 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1403 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 265/500
Epoch 00265: val_loss improved from 0.02866 to 0.02860, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0519 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1398 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 266/500
Epoch 00266: val_loss improved from 0.02860 to 0.02857, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0528 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1398 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 267/500
Epoch 00267: val_loss improved from 0.02857 to 0.02838, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0546 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1392 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 268/500
Epoch 00268: val_loss improved from 0.02838 to 0.02837, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0548 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1392 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 269/500
Epoch 00269: val_loss improved from 0.02837 to 0.02814, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0513 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1385 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 270/500
Epoch 00270: val_loss improved from 0.02814 to 0.02807, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0550 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1382 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 271/500
Epoch 00271: val_loss improved from 0.02807 to 0.02798, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0535 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1380 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 272/500
Epoch 00272: val_loss improved from 0.02798 to 0.02796, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0505 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1379 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 273/500
Epoch 00273: val_loss did not improve from 0.02796
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0524 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1381 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 274/500
Epoch 00274: val_loss improved from 0.02796 to 0.02790, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0526 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1378 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 275/500
Epoch 00275: val_loss improved from 0.02790 to 0.02780, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0502 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1375 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 276/500
Epoch 00276: val_loss improved from 0.02780 to 0.02779, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0560 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1375 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 277/500
Epoch 00277: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0535 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1379 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 278/500
Epoch 00278: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0534 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1384 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 279/500
Epoch 00279: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1387 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 280/500
Epoch 00280: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0511 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1390 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 281/500
Epoch 00281: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0506 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1384 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 282/500
Epoch 00282: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0529 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1383 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 283/500
Epoch 00283: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0529 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1383 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
Epoch 284/500
Epoch 00284: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0558 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1384 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 285/500
Epoch 00285: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0535 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1383 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 286/500
Epoch 00286: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0533 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1384 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 287/500
Epoch 00287: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0521 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1384 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 288/500
Epoch 00288: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0532 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1386 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 289/500
Epoch 00289: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0554 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1378 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 290/500
Epoch 00290: val_loss did not improve from 0.02779
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0537 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1378 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 291/500
Epoch 00291: val_loss improved from 0.02779 to 0.02743, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0531 - val_loss: 0.0274 - val_mse: 0.0274 - val_mae: 0.1366 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 292/500
Epoch 00292: val_loss improved from 0.02743 to 0.02703, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0548 - val_loss: 0.0270 - val_mse: 0.0270 - val_mae: 0.1353 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 293/500
Epoch 00293: val_loss improved from 0.02703 to 0.02689, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0527 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1349 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 294/500
Epoch 00294: val_loss improved from 0.02689 to 0.02665, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0519 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1341 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 295/500
Epoch 00295: val_loss did not improve from 0.02665
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0517 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1344 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 296/500
Epoch 00296: val_loss improved from 0.02665 to 0.02661, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0554 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1340 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 297/500
Epoch 00297: val_loss did not improve from 0.02661
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0541 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1343 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 298/500
Epoch 00298: val_loss did not improve from 0.02661
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0527 - val_loss: 0.0268 - val_mse: 0.0268 - val_mae: 0.1349 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 299/500
Epoch 00299: val_loss improved from 0.02661 to 0.02654, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0541 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1339 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 300/500
Epoch 00300: val_loss improved from 0.02654 to 0.02646, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0533 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1337 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
Epoch 301/500
Epoch 00301: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0524 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1342 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 302/500
Epoch 00302: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0556 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1350 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 303/500
Epoch 00303: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0493 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1357 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 304/500
Epoch 00304: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0551 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1357 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 305/500
Epoch 00305: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0506 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1360 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 306/500
Epoch 00306: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0518 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1359 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 307/500
Epoch 00307: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0531 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1352 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 308/500
Epoch 00308: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0494 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1360 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 309/500
Epoch 00309: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0516 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1359 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 310/500
Epoch 00310: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0544 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1352 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 311/500
Epoch 00311: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0541 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1348 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 312/500
Epoch 00312: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.0270 - val_mse: 0.0270 - val_mae: 0.1355 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 313/500
Epoch 00313: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0530 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1365 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 314/500
Epoch 00314: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0502 - val_loss: 0.0273 - val_mse: 0.0273 - val_mae: 0.1368 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 315/500
Epoch 00315: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0508 - val_loss: 0.0276 - val_mse: 0.0276 - val_mae: 0.1375 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 316/500
Epoch 00316: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0549 - val_loss: 0.0274 - val_mse: 0.0274 - val_mae: 0.1372 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 317/500
Epoch 00317: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0509 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1365 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 318/500
Epoch 00318: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0525 - val_loss: 0.0270 - val_mse: 0.0270 - val_mae: 0.1359 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 319/500
Epoch 00319: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0509 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1344 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 320/500
Epoch 00320: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0541 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1348 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 321/500
Epoch 00321: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0521 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1356 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 322/500
Epoch 00322: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0558 - val_loss: 0.0268 - val_mse: 0.0268 - val_mae: 0.1352 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 323/500
Epoch 00323: val_loss did not improve from 0.02646
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0534 - val_loss: 0.0267 - val_mse: 0.0267 - val_mae: 0.1348 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 324/500
Epoch 00324: val_loss improved from 0.02646 to 0.02646, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0538 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1341 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 325/500
Epoch 00325: val_loss improved from 0.02646 to 0.02624, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.0262 - val_mse: 0.0262 - val_mae: 0.1335 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 326/500
Epoch 00326: val_loss did not improve from 0.02624
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0549 - val_loss: 0.0264 - val_mse: 0.0264 - val_mae: 0.1339 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 327/500
Epoch 00327: val_loss did not improve from 0.02624
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0525 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1347 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 328/500
Epoch 00328: val_loss did not improve from 0.02624
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0513 - val_loss: 0.0264 - val_mse: 0.0264 - val_mae: 0.1341 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 329/500
Epoch 00329: val_loss improved from 0.02624 to 0.02613, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0509 - val_loss: 0.0261 - val_mse: 0.0261 - val_mae: 0.1331 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 330/500
Epoch 00330: val_loss improved from 0.02613 to 0.02586, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0517 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1323 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 331/500
Epoch 00331: val_loss improved from 0.02586 to 0.02575, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0471 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1319 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 332/500
Epoch 00332: val_loss improved from 0.02575 to 0.02561, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0503 - val_loss: 0.0256 - val_mse: 0.0256 - val_mae: 0.1315 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 333/500
Epoch 00333: val_loss improved from 0.02561 to 0.02540, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0516 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1308 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 334/500
Epoch 00334: val_loss improved from 0.02540 to 0.02523, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0537 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1303 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 335/500
Epoch 00335: val_loss did not improve from 0.02523
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0539 - val_loss: 0.0253 - val_mse: 0.0253 - val_mae: 0.1304 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 336/500
Epoch 00336: val_loss improved from 0.02523 to 0.02516, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1300 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 337/500
Epoch 00337: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0482 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1302 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 338/500
Epoch 00338: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0520 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1308 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 339/500
Epoch 00339: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0513 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1309 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 340/500
Epoch 00340: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0544 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1308 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 341/500
Epoch 00341: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0512 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1308 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 342/500
Epoch 00342: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0515 - val_loss: 0.0255 - val_mse: 0.0255 - val_mae: 0.1313 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 343/500
Epoch 00343: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0503 - val_loss: 0.0256 - val_mse: 0.0256 - val_mae: 0.1315 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 344/500
Epoch 00344: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0522 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1319 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 345/500
Epoch 00345: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1319 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 346/500
Epoch 00346: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0506 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1309 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 347/500
Epoch 00347: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0533 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1310 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 348/500
Epoch 00348: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0507 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1304 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 349/500
Epoch 00349: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0546 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1302 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 350/500
Epoch 00350: val_loss did not improve from 0.02516
16/16 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0488 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1305 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 351/500
Epoch 00351: val_loss improved from 0.02516 to 0.02512, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0520 - val_loss: 0.0251 - val_mse: 0.0251 - val_mae: 0.1301 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 352/500
Epoch 00352: val_loss improved from 0.02512 to 0.02486, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0529 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1293 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 353/500
Epoch 00353: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0250 - val_mse: 0.0250 - val_mae: 0.1296 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 354/500
Epoch 00354: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0251 - val_mse: 0.0251 - val_mae: 0.1301 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 355/500
Epoch 00355: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0500 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1310 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 356/500
Epoch 00356: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0529 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1312 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 357/500
Epoch 00357: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0521 - val_loss: 0.0253 - val_mse: 0.0253 - val_mae: 0.1309 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 358/500
Epoch 00358: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0521 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1304 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 359/500
Epoch 00359: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0494 - val_loss: 0.0251 - val_mse: 0.0251 - val_mae: 0.1302 - lr: 1.0000e-05 - 75ms/epoch - 5ms/step
Epoch 360/500
Epoch 00360: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1310 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 361/500
Epoch 00361: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0502 - val_loss: 0.0254 - val_mse: 0.0254 - val_mae: 0.1313 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 362/500
Epoch 00362: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0510 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1307 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 363/500
Epoch 00363: val_loss did not improve from 0.02486
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0510 - val_loss: 0.0252 - val_mse: 0.0252 - val_mae: 0.1306 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 364/500
Epoch 00364: val_loss improved from 0.02486 to 0.02479, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0514 - val_loss: 0.0248 - val_mse: 0.0248 - val_mae: 0.1292 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 365/500
Epoch 00365: val_loss improved from 0.02479 to 0.02463, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0497 - val_loss: 0.0246 - val_mse: 0.0246 - val_mae: 0.1287 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 366/500
Epoch 00366: val_loss did not improve from 0.02463
16/16 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0527 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1297 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 367/500
Epoch 00367: val_loss did not improve from 0.02463
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0506 - val_loss: 0.0249 - val_mse: 0.0249 - val_mae: 0.1297 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 368/500
Epoch 00368: val_loss did not improve from 0.02463
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0522 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1291 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 369/500
Epoch 00369: val_loss did not improve from 0.02463
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0518 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1289 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 370/500
Epoch 00370: val_loss did not improve from 0.02463
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0536 - val_loss: 0.0246 - val_mse: 0.0246 - val_mae: 0.1289 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 371/500
Epoch 00371: val_loss improved from 0.02463 to 0.02450, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0509 - val_loss: 0.0245 - val_mse: 0.0245 - val_mae: 0.1284 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 372/500
Epoch 00372: val_loss improved from 0.02450 to 0.02412, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0507 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1271 - lr: 1.0000e-05 - 113ms/epoch - 7ms/step
Epoch 373/500
Epoch 00373: val_loss improved from 0.02412 to 0.02408, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0503 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1270 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 374/500
Epoch 00374: val_loss did not improve from 0.02408
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0517 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1272 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 375/500
Epoch 00375: val_loss did not improve from 0.02408
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0528 - val_loss: 0.0242 - val_mse: 0.0242 - val_mae: 0.1274 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 376/500
Epoch 00376: val_loss improved from 0.02408 to 0.02405, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0504 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1270 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 377/500
Epoch 00377: val_loss did not improve from 0.02405
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0515 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1271 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 378/500
Epoch 00378: val_loss did not improve from 0.02405
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0511 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1270 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 379/500
Epoch 00379: val_loss improved from 0.02405 to 0.02386, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0502 - val_loss: 0.0239 - val_mse: 0.0239 - val_mae: 0.1263 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 380/500
Epoch 00380: val_loss improved from 0.02386 to 0.02352, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0500 - val_loss: 0.0235 - val_mse: 0.0235 - val_mae: 0.1252 - lr: 1.0000e-05 - 122ms/epoch - 8ms/step
Epoch 381/500
Epoch 00381: val_loss did not improve from 0.02352
16/16 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0530 - val_loss: 0.0237 - val_mse: 0.0237 - val_mae: 0.1259 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 382/500
Epoch 00382: val_loss did not improve from 0.02352
16/16 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0528 - val_loss: 0.0239 - val_mse: 0.0239 - val_mae: 0.1265 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 383/500
Epoch 00383: val_loss did not improve from 0.02352
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0499 - val_loss: 0.0240 - val_mse: 0.0240 - val_mae: 0.1271 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 384/500
Epoch 00384: val_loss did not improve from 0.02352
16/16 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0511 - val_loss: 0.0241 - val_mse: 0.0241 - val_mae: 0.1274 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
[... epochs 385–499 truncated: val_loss crept down from 0.02352 to 0.02122, the checkpoint saving to LSTM3.h5 on each improvement, with lr held at 1.0000e-05 throughout ...]
Epoch 500/500
Epoch 00500: val_loss improved from 0.02122 to 0.02099, saving model to LSTM3.h5
16/16 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0496 - val_loss: 0.0210 - val_mse: 0.0210 - val_mae: 0.1183 - lr: 1.0000e-05 - 122ms/epoch - 8ms/step
SMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.725767438505336
RMSE: 5.7206439706125165
MAPE: 4.798603095387009
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 143.9522591181831
RMSE: 11.998010631691534
MAPE: 10.07848404711658
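The accuracy and error figures above can be reproduced with a short helper. This is a sketch, not the notebook's actual evaluation code: the exact definition of the two accuracy figures is not shown, so directional accuracy (sign agreement of day-over-day moves) is assumed here, and the function name `evaluate_forecast` is hypothetical.

```python
import numpy as np

def evaluate_forecast(actual, predicted):
    """Compute the error metrics reported above (MSE, RMSE, MAPE)
    plus an assumed directional accuracy: the percentage of days on
    which the predicted move has the same sign as the actual move."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / actual)) * 100
    actual_dir = np.sign(np.diff(actual))
    pred_dir = np.sign(np.diff(predicted))
    direction_acc = np.mean(actual_dir == pred_dir) * 100
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "Direction %": direction_acc}

print(evaluate_forecast([100, 102, 101, 105], [101, 101.5, 102, 104]))
```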
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
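TA-Lib's `WMA` (whose help text appears above) weights the most recent of the `timeperiod` prices highest, linearly down to 1 for the oldest. A minimal NumPy equivalent, assuming TA-Lib's convention of emitting NaN for the first `timeperiod - 1` outputs:

```python
import numpy as np

def wma(prices, timeperiod=30):
    """Linearly weighted moving average: weight n for the newest price
    in the window, down to weight 1 for the oldest."""
    prices = np.asarray(prices, dtype=float)
    n = timeperiod
    weights = np.arange(1, n + 1, dtype=float)  # 1, 2, ..., n
    out = np.full(prices.shape, np.nan)
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out

print(wma([1, 2, 3, 4, 5], timeperiod=3))
```

For a 3-period window the weights are 1, 2, 3 (summing to 6), so the first defined value over prices 1, 2, 3 is (1 + 4 + 9) / 6.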
Working on WMA predictions
parameters used: 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.40 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.26 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.24 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.45 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.750 seconds
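The stepwise search above picks the (p, d, q) order with the lowest AIC. As an illustration of the criterion being minimized (not pmdarima's actual fitting code, which uses full maximum likelihood rather than the OLS approximation below), AR(p) candidates can be scored on the differenced series:

```python
import numpy as np

def ar_aic(series, p):
    """Fit AR(p) by ordinary least squares on an (already differenced)
    series and return the Gaussian AIC = 2k - 2 log L."""
    y = np.asarray(series, dtype=float)
    n = len(y) - p
    # Design matrix of the p lagged values: column j holds lag j+1
    X = np.column_stack([y[p - j - 1 : p - j - 1 + n] for j in range(p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = p + 1  # p AR coefficients plus the innovation variance
    return 2 * k - 2 * loglik

# Synthetic I(3) series: third-difference it (the d=3 chosen above),
# then compare candidate AR orders on the stationary remainder.
rng = np.random.default_rng(0)
y = np.cumsum(np.cumsum(np.cumsum(rng.normal(size=500))))
dy = np.diff(y, n=3)
print([round(ar_aic(dy, p), 2) for p in range(1, 5)])
```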
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1709.629
Date:                Sun, 12 Dec 2021   AIC                           3427.258
Time:                        15:01:30   BIC                           3446.021
Sample:                             0   HQIC                          3434.464
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1981      0.003   -389.386      0.000      -1.204      -1.192
ar.L2         -0.8974      0.006   -139.699      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.737      0.000      -0.410      -0.387
sigma2         4.0860      0.019    215.311      0.000       4.049       4.123
===================================================================================
Ljung-Box (L1) (Q):                  14.57   Jarque-Bera (JB):           2460901.70
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.90
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.75
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
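The diagnostics report skew 3.90 and kurtosis 273.75, far from the Gaussian values of 0 and 3, which is what drives the enormous Jarque-Bera statistic and ties back to the non-mesokurtic distributions discussed earlier. A sketch of the statistic, assuming the standard JB formula rather than statsmodels' internal implementation:

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera normality statistic: JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is sample skewness and K is sample kurtosis. Large values
    (heavy tails or asymmetry) reject normality of the residuals."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = x.mean()
    s2 = np.mean((x - m) ** 2)
    skew = np.mean((x - m) ** 3) / s2 ** 1.5
    kurt = np.mean((x - m) ** 4) / s2 ** 2
    return n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)

rng = np.random.default_rng(1)
print(jarque_bera(rng.normal(size=2000)))        # near-normal: small JB
print(jarque_bera(rng.standard_t(2, size=2000))) # heavy tails: huge JB
```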
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.02131, saving model to LSTM3.h5
17/17 - 2s - loss: 0.8890 - mse: 0.8890 - mae: 0.7526 - val_loss: 0.0213 - val_mse: 0.0213 - val_mae: 0.1190 - lr: 0.0010 - 2s/epoch - 127ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.02131 to 0.02058, saving model to LSTM3.h5
17/17 - 0s - loss: 0.1482 - mse: 0.1482 - mae: 0.3315 - val_loss: 0.0206 - val_mse: 0.0206 - val_mae: 0.1164 - lr: 0.0010 - 99ms/epoch - 6ms/step
[... epochs 3–35 truncated: val_loss never improved on the epoch-2 best of 0.02058; ReduceLROnPlateau cut the learning rate from 0.0010 to 1.0000e-04 at epoch 7 and to 1.0000e-05 at epoch 12 (clamped at the 1e-05 floor at epoch 17), while val_loss drifted up toward 0.073 ...]
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0252 - mse: 0.0252 - mae: 0.1272 - val_loss: 0.0733 - val_mse: 0.0733 - val_mae: 0.2388 - lr: 1.0000e-05 - 93ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0254 - mse: 0.0254 - mae: 0.1276 - val_loss: 0.0733 - val_mse: 0.0733 - val_mae: 0.2389 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0230 - mse: 0.0230 - mae: 0.1210 - val_loss: 0.0735 - val_mse: 0.0735 - val_mae: 0.2392 - lr: 1.0000e-05 - 108ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0245 - mse: 0.0245 - mae: 0.1249 - val_loss: 0.0735 - val_mse: 0.0735 - val_mae: 0.2392 - lr: 1.0000e-05 - 105ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0219 - mse: 0.0219 - mae: 0.1178 - val_loss: 0.0735 - val_mse: 0.0735 - val_mae: 0.2394 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0240 - mse: 0.0240 - mae: 0.1242 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2395 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0239 - mse: 0.0239 - mae: 0.1250 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2396 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0243 - mse: 0.0243 - mae: 0.1244 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2395 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0228 - mse: 0.0228 - mae: 0.1221 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2395 - lr: 1.0000e-05 - 91ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0219 - mse: 0.0219 - mae: 0.1177 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2396 - lr: 1.0000e-05 - 92ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0217 - mse: 0.0217 - mae: 0.1162 - val_loss: 0.0737 - val_mse: 0.0737 - val_mae: 0.2397 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0265 - mse: 0.0265 - mae: 0.1279 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2396 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0236 - mse: 0.0236 - mae: 0.1218 - val_loss: 0.0736 - val_mse: 0.0736 - val_mae: 0.2396 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0226 - mse: 0.0226 - mae: 0.1212 - val_loss: 0.0737 - val_mse: 0.0737 - val_mae: 0.2397 - lr: 1.0000e-05 - 90ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0227 - mse: 0.0227 - mae: 0.1186 - val_loss: 0.0737 - val_mse: 0.0737 - val_mae: 0.2397 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0237 - mse: 0.0237 - mae: 0.1222 - val_loss: 0.0737 - val_mse: 0.0737 - val_mae: 0.2398 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.02058
17/17 - 0s - loss: 0.0208 - mse: 0.0208 - mae: 0.1157 - val_loss: 0.0737 - val_mse: 0.0737 - val_mae: 0.2399 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 00052: early stopping
SMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.725767438505336
RMSE: 5.7206439706125165
MAPE: 4.798603095387009
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 143.9522591181831
RMSE: 11.998010631691534
MAPE: 10.07848404711658
WMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 24.586224407987817
RMSE: 4.958449798877449
MAPE: 3.970226889097132
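The SMA, EMA, and WMA summaries above pair two directional-accuracy scores with MSE, RMSE, and MAPE. The notebook's own metric code is not shown; the sketch below is one plausible reconstruction, and the exact definitions of "Prediction vs Close" and "Prediction vs Prediction" are assumptions:

```python
import numpy as np

def evaluate(pred, close):
    """Error metrics plus two directional-accuracy scores for a forecast."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    # "Prediction vs Close" (assumed): does today's forecast move in the
    # same direction, relative to yesterday's close, as the actual close?
    d_close = np.sign(np.diff(close))
    acc_close = np.mean(np.sign(pred[1:] - close[:-1]) == d_close) * 100
    # "Prediction vs Prediction" (assumed): does the direction of
    # consecutive forecasts match the direction of consecutive closes?
    acc_pred = np.mean(np.sign(np.diff(pred)) == d_close) * 100
    return acc_close, acc_pred, mse, rmse, mape
```

Both accuracy scores hover near 50% in the printouts above, which is what this definition would produce for a forecast with little directional edge.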
DEMA
DEMA([input_arrays], [timeperiod=30])

Double Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
89
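The help text above is TA-Lib's docstring for `DEMA`. The indicator itself is just DEMA(n) = 2·EMA(n) − EMA(EMA(n)), so it can be reproduced with pandas alone when TA-Lib is unavailable; this pandas version is a sketch, not the notebook's actual call, and it differs from `talib.DEMA` in its warm-up (TA-Lib emits NaN for the first 2·(n−1) bars):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA(n) - EMA(EMA(n))."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

The second EMA smooths the first EMA's lag component, which is why DEMA tracks turns faster than a plain EMA of the same period.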
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.42 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.38 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.91 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.90 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.21 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.040 seconds
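The trace above shows pmdarima's `auto_arima` stepwise search ranking candidate orders by AIC (candidates whose optimizer fails report `AIC=inf` and are skipped). The selection principle is simple; as a numpy-only illustration under assumptions (a Gaussian least-squares AIC, AR terms only, the series already differenced), one can pick an AR order the same way:

```python
import numpy as np

def ar_aic(x, p):
    """AIC of an AR(p) model fit by ordinary least squares."""
    x = np.asarray(x, float)
    if p == 0:
        resid, k = x - x.mean(), 1
    else:
        # lag matrix: row t holds x[t-1], ..., x[t-p]
        X = np.column_stack([x[p - i - 1: len(x) - i - 1] for i in range(p)])
        y = x[p:]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid, k = y - X @ beta, p + 1
    n = len(resid)
    return n * np.log(np.mean(resid ** 2)) + 2 * k

def best_ar_order(x, max_p=3):
    """Return the p in 0..max_p that minimizes AIC, as auto_arima does."""
    return min(range(max_p + 1), key=lambda p: ar_aic(x, p))
```

This mirrors how the search above settles on ARIMA(3,3,0): each added AR lag must cut the residual variance enough to beat the 2·k penalty.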
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1795.475
Date:                Sun, 12 Dec 2021   AIC                           3598.951
Time:                        15:02:45   BIC                           3617.714
Sample:                             0   HQIC                          3606.157
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1983      0.003   -389.581      0.000      -1.204      -1.192
ar.L2         -0.8973      0.006   -139.732      0.000      -0.910      -0.885
ar.L3         -0.3983      0.006    -68.649      0.000      -0.410      -0.387
sigma2         5.0573      0.023    215.292      0.000       5.011       5.103
===================================================================================
Ljung-Box (L1) (Q):                  14.41   Jarque-Bera (JB):           2460553.80
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.00   Skew:                             3.89
Prob(H) (two-sided):                  0.00   Kurtosis:                       273.74
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
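The LSTM training trace that follows is shaped by three standard Keras callbacks: ModelCheckpoint (the "saving model to LSTM3.h5" lines), ReduceLROnPlateau (the learning-rate drops from 1e-3 to 1e-4 to 1e-5), and EarlyStopping (the closing "early stopping" line). The notebook's patience settings are not visible in the log; this framework-free replay of the two schedulers' logic uses assumed patience values purely to illustrate the mechanism:

```python
def run_schedule(val_losses, lr=1e-3, factor=0.1, plateau_patience=5,
                 stop_patience=10, min_lr=1e-5):
    """Replay ReduceLROnPlateau + EarlyStopping over a val_loss series.

    Returns (stop_epoch, final_lr, best_val_loss). Patience values are
    assumptions, not the notebook's actual callback arguments.
    """
    best = float("inf")
    since_best_lr = since_best_stop = 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:                      # "val_loss improved from ..."
            best = vl
            since_best_lr = since_best_stop = 0
        else:                              # "val_loss did not improve"
            since_best_lr += 1
            since_best_stop += 1
        if since_best_lr >= plateau_patience:
            lr = max(lr * factor, min_lr)  # "reducing learning rate to ..."
            since_best_lr = 0
        if since_best_stop >= stop_patience:
            return epoch, lr, best         # "early stopping"
    return len(val_losses), lr, best
```

Note that because the learning rate plateaus at `min_lr` before the stop patience is exhausted, runs like the one below spend their final dozens of epochs making no progress at lr=1e-5, exactly as the log shows.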
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.14531, saving model to LSTM3.h5
10/10 - 3s - loss: 0.4588 - mse: 0.4588 - mae: 0.5822 - val_loss: 0.1453 - val_mse: 0.1453 - val_mae: 0.3471 - lr: 0.0010 - 3s/epoch - 252ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.14531 to 0.06661, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0766 - mse: 0.0766 - mae: 0.2246 - val_loss: 0.0666 - val_mse: 0.0666 - val_mae: 0.2219 - lr: 0.0010 - 86ms/epoch - 9ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.06661 to 0.03347, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0677 - mse: 0.0677 - mae: 0.2234 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1513 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.03347 to 0.01798, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0283 - mse: 0.0283 - mae: 0.1346 - val_loss: 0.0180 - val_mse: 0.0180 - val_mae: 0.1095 - lr: 0.0010 - 76ms/epoch - 8ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.01798 to 0.01389, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0252 - mse: 0.0252 - mae: 0.1227 - val_loss: 0.0139 - val_mse: 0.0139 - val_mae: 0.0965 - lr: 0.0010 - 77ms/epoch - 8ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.01389 to 0.01325, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0227 - mse: 0.0227 - mae: 0.1170 - val_loss: 0.0132 - val_mse: 0.0132 - val_mae: 0.0939 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.01325
10/10 - 0s - loss: 0.0199 - mse: 0.0199 - mae: 0.1116 - val_loss: 0.0133 - val_mse: 0.0133 - val_mae: 0.0931 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.01325 to 0.01313, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0160 - mse: 0.0160 - mae: 0.1009 - val_loss: 0.0131 - val_mse: 0.0131 - val_mae: 0.0922 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.01313 to 0.01294, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0150 - mse: 0.0150 - mae: 0.0960 - val_loss: 0.0129 - val_mse: 0.0129 - val_mae: 0.0916 - lr: 0.0010 - 85ms/epoch - 8ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.01294 to 0.01274, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0132 - mse: 0.0132 - mae: 0.0896 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0911 - lr: 0.0010 - 84ms/epoch - 8ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.01274 to 0.01251, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0880 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0907 - lr: 0.0010 - 82ms/epoch - 8ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.01251
10/10 - 0s - loss: 0.0113 - mse: 0.0113 - mae: 0.0834 - val_loss: 0.0127 - val_mse: 0.0127 - val_mae: 0.0921 - lr: 0.0010 - 82ms/epoch - 8ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.01251 to 0.01239, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0834 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0910 - lr: 0.0010 - 86ms/epoch - 9ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.01239 to 0.01209, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0797 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0898 - lr: 0.0010 - 86ms/epoch - 9ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.01209
10/10 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0784 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0904 - lr: 0.0010 - 69ms/epoch - 7ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.01209
10/10 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0765 - val_loss: 0.0121 - val_mse: 0.0121 - val_mae: 0.0905 - lr: 0.0010 - 58ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.01209 to 0.01191, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0091 - mse: 0.0091 - mae: 0.0755 - val_loss: 0.0119 - val_mse: 0.0119 - val_mae: 0.0895 - lr: 0.0010 - 96ms/epoch - 10ms/step
Epoch 18/500
Epoch 00018: val_loss improved from 0.01191 to 0.01123, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0719 - val_loss: 0.0112 - val_mse: 0.0112 - val_mae: 0.0859 - lr: 0.0010 - 74ms/epoch - 7ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.01123 to 0.01102, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0685 - val_loss: 0.0110 - val_mse: 0.0110 - val_mae: 0.0853 - lr: 0.0010 - 76ms/epoch - 8ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.01102 to 0.01081, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0697 - val_loss: 0.0108 - val_mse: 0.0108 - val_mae: 0.0845 - lr: 0.0010 - 76ms/epoch - 8ms/step
Epoch 21/500
Epoch 00021: val_loss improved from 0.01081 to 0.01046, saving model to LSTM3.h5
10/10 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0710 - val_loss: 0.0105 - val_mse: 0.0105 - val_mae: 0.0826 - lr: 0.0010 - 88ms/epoch - 9ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0675 - val_loss: 0.0109 - val_mse: 0.0109 - val_mae: 0.0860 - lr: 0.0010 - 68ms/epoch - 7ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0681 - val_loss: 0.0114 - val_mse: 0.0114 - val_mae: 0.0880 - lr: 0.0010 - 64ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0666 - val_loss: 0.0131 - val_mse: 0.0131 - val_mae: 0.0949 - lr: 0.0010 - 58ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0656 - val_loss: 0.0126 - val_mse: 0.0126 - val_mae: 0.0934 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 26/500
Epoch 00026: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00026: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0623 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0930 - lr: 0.0010 - 61ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0628 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0925 - lr: 1.0000e-04 - 65ms/epoch - 6ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0619 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0927 - lr: 1.0000e-04 - 64ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0602 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0931 - lr: 1.0000e-04 - 70ms/epoch - 7ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0609 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0932 - lr: 1.0000e-04 - 70ms/epoch - 7ms/step
Epoch 31/500
Epoch 00031: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00031: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0638 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0928 - lr: 1.0000e-04 - 73ms/epoch - 7ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0611 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0927 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0618 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0928 - lr: 1.0000e-05 - 59ms/epoch - 6ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0616 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0929 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0603 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0930 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 36/500
Epoch 00036: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00036: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0654 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0930 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0669 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0930 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0633 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0931 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0644 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0931 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0620 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0930 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0644 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0929 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0629 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0930 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0598 - val_loss: 0.0125 - val_mse: 0.0125 - val_mae: 0.0929 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0631 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0928 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0625 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0927 - lr: 1.0000e-05 - 60ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0637 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0927 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0606 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0927 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0608 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0927 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0621 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0927 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0614 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0925 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0621 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0924 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0618 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0923 - lr: 1.0000e-05 - 82ms/epoch - 8ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0923 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0603 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0923 - lr: 1.0000e-05 - 76ms/epoch - 8ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0623 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0922 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0652 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0921 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0620 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0921 - lr: 1.0000e-05 - 63ms/epoch - 6ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0605 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0922 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0616 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0923 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 60/500
Epoch 00060: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0647 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0922 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 61/500
Epoch 00061: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0600 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0922 - lr: 1.0000e-05 - 74ms/epoch - 7ms/step
Epoch 62/500
Epoch 00062: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0646 - val_loss: 0.0122 - val_mse: 0.0122 - val_mae: 0.0921 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0624 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0921 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 64/500
Epoch 00064: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0625 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0922 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0606 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0923 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0623 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0924 - lr: 1.0000e-05 - 65ms/epoch - 6ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0594 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0925 - lr: 1.0000e-05 - 58ms/epoch - 6ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0612 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0925 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0608 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0926 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0601 - val_loss: 0.0124 - val_mse: 0.0124 - val_mae: 0.0926 - lr: 1.0000e-05 - 72ms/epoch - 7ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.01046
10/10 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0612 - val_loss: 0.0123 - val_mse: 0.0123 - val_mae: 0.0925 - lr: 1.0000e-05 - 79ms/epoch - 8ms/step
Epoch 00071: early stopping
SMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.725767438505336
RMSE: 5.7206439706125165
MAPE: 4.798603095387009
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 143.9522591181831
RMSE: 11.998010631691534
MAPE: 10.07848404711658
WMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 24.586224407987817
RMSE: 4.958449798877449
MAPE: 3.970226889097132
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 51.49% Accuracy
MSE: 207.2547601932076
RMSE: 14.3963453762824
MAPE: 12.894635987621164
KAMA
KAMA([input_arrays], [timeperiod=30])

Kaufman Adaptive Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
18
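Unlike the fixed-weight averages above, KAMA adapts its smoothing constant to the efficiency ratio of recent price action, smoothing hard in choppy markets and tracking tightly in trends. A numpy sketch of Kaufman's recursion follows; mapping TA-Lib's single `timeperiod` to the ER window and using Kaufman's default fast/slow constants of 2 and 30 are assumptions about TA-Lib's internals:

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average via Kaufman's recursion."""
    price = np.asarray(price, float)
    n = timeperiod
    out = np.full_like(price, np.nan)
    if len(price) <= n:
        return out
    fastest, slowest = 2 / (fast + 1), 2 / (slow + 1)
    out[n] = price[n]                      # seed with the first usable price
    for t in range(n + 1, len(price)):
        change = abs(price[t] - price[t - n])
        volatility = np.sum(np.abs(np.diff(price[t - n:t + 1])))
        er = change / volatility if volatility > 0 else 0.0   # efficiency ratio
        sc = (er * (fastest - slowest) + slowest) ** 2        # smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

When the window's net move equals its total path length (er = 1, a clean trend), the smoothing constant snaps to the fast end; when the path is all churn (er near 0), it collapses to the slow end.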
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.28 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.18 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.70 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.24 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.989 seconds
                               SARIMAX Results
==============================================================================
Dep. Variable:                      y   No. Observations:                  808
Model:               SARIMAX(3, 3, 0)   Log Likelihood               -1674.717
Date:                Sun, 12 Dec 2021   AIC                           3357.435
Time:                        15:04:02   BIC                           3376.198
Sample:                             0   HQIC                          3364.641
                                - 808
Covariance Type:                  opg
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1         -1.1955      0.003   -381.246      0.000      -1.202      -1.189
ar.L2         -0.8964      0.007   -135.835      0.000      -0.909      -0.883
ar.L3         -0.3971      0.006    -67.229      0.000      -0.409      -0.385
sigma2         3.7466      0.018    211.623      0.000       3.712       3.781
===================================================================================
Ljung-Box (L1) (Q):                  14.20   Jarque-Bera (JB):           2338363.32
Prob(Q):                              0.00   Prob(JB):                         0.00
Heteroskedasticity (H):               0.01   Skew:                             3.76
Prob(H) (two-sided):                  0.00   Kurtosis:                       266.93
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.52870, saving model to LSTM3.h5
45/45 - 2s - loss: 0.1434 - mse: 0.1434 - mae: 0.3013 - val_loss: 0.5287 - val_mse: 0.5287 - val_mae: 0.6882 - lr: 0.0010 - 2s/epoch - 49ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.52870 to 0.22252, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0645 - mse: 0.0645 - mae: 0.2074 - val_loss: 0.2225 - val_mse: 0.2225 - val_mae: 0.4296 - lr: 0.0010 - 216ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.22252 to 0.11434, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0430 - mse: 0.0430 - mae: 0.1667 - val_loss: 0.1143 - val_mse: 0.1143 - val_mae: 0.2943 - lr: 0.0010 - 240ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.11434 to 0.07710, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0251 - mse: 0.0251 - mae: 0.1281 - val_loss: 0.0771 - val_mse: 0.0771 - val_mae: 0.2329 - lr: 0.0010 - 220ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.07710 to 0.05932, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0175 - mse: 0.0175 - mae: 0.1066 - val_loss: 0.0593 - val_mse: 0.0593 - val_mae: 0.1992 - lr: 0.0010 - 237ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.05932 to 0.05014, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0134 - mse: 0.0134 - mae: 0.0935 - val_loss: 0.0501 - val_mse: 0.0501 - val_mae: 0.1808 - lr: 0.0010 - 227ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05014
45/45 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0837 - val_loss: 0.0542 - val_mse: 0.0542 - val_mae: 0.1922 - lr: 0.0010 - 251ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05014
45/45 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0823 - val_loss: 0.0536 - val_mse: 0.0536 - val_mae: 0.1921 - lr: 0.0010 - 189ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.05014 to 0.04707, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0085 - mse: 0.0085 - mae: 0.0739 - val_loss: 0.0471 - val_mse: 0.0471 - val_mae: 0.1781 - lr: 0.0010 - 213ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04707
45/45 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0764 - val_loss: 0.0554 - val_mse: 0.0554 - val_mae: 0.1979 - lr: 0.0010 - 187ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.04707 to 0.04541, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0707 - val_loss: 0.0454 - val_mse: 0.0454 - val_mae: 0.1760 - lr: 0.0010 - 255ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.04541
45/45 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0721 - val_loss: 0.0535 - val_mse: 0.0535 - val_mae: 0.1954 - lr: 0.0010 - 220ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.04541 to 0.04341, saving model to LSTM3.h5
45/45 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0644 - val_loss: 0.0434 - val_mse: 0.0434 - val_mae: 0.1726 - lr: 0.0010 - 245ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.04341
45/45 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0645 - val_loss: 0.0495 - val_mse: 0.0495 - val_mae: 0.1879 - lr: 0.0010 - 178ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04341
45/45 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0594 - val_loss: 0.0508 - val_mse: 0.0508 - val_mae: 0.1916 - lr: 0.0010 - 241ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.04341
45/45 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0615 - val_loss: 0.0451 - val_mse: 0.0451 - val_mae: 0.1783 - lr: 0.0010 - 243ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04341
45/45 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0546 - val_loss: 0.0724 - val_mse: 0.0724 - val_mae: 0.2376 - lr: 0.0010 - 264ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00018: val_loss did not improve from 0.04341
45/45 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0561 - val_loss: 0.0595 - val_mse: 0.0595 - val_mae: 0.2114 - lr: 0.0010 - 229ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.04341
45/45 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0500 - val_loss: 0.0594 - val_mse: 0.0594 - val_mae: 0.2112 - lr: 1.0000e-04 - 224ms/epoch - 5ms/step
[Epochs 20-62 elided: val_loss never improved from 0.04341; ReduceLROnPlateau dropped the learning rate to 1e-05 at epoch 23 (and floored it there at epoch 28), and val_loss plateaued near 0.058-0.060.]
Epoch 63/500
Epoch 00063: val_loss did not improve from 0.04341
45/45 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0488 - val_loss: 0.0581 - val_mse: 0.0581 - val_mae: 0.2091 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 00063: early stopping
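The ReduceLROnPlateau and early-stopping messages in the log above follow a simple rule: when val_loss stops improving for a set number of epochs, multiply the learning rate by a fixed factor (here 0.1, bottoming out at 1e-05). Below is a minimal pure-Python sketch of that scheduling logic, not the Keras implementation itself; the `patience` value is an assumption, since it cannot be read off the log directly.

```python
class PlateauScheduler:
    """Sketch of ReduceLROnPlateau-style logic: cut the learning rate by
    `factor` once the monitored loss has failed to improve for `patience`
    consecutive epochs, never going below `min_lr`. The factor and min_lr
    are inferred from the log above; patience is assumed."""

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:      # improvement: remember it, reset counter
            self.best = val_loss
            self.wait = 0
        else:                         # plateau: count epochs without improvement
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr

sched = PlateauScheduler()
losses = [0.09, 0.05, 0.043, 0.06, 0.06, 0.06, 0.059, 0.06]
lrs = [sched.step(l) for l in losses]  # rate drops only after the plateau
```

EarlyStopping works the same way with a (typically longer) patience, except that instead of lowering the rate it halts training, which is what produced the "early stopping" line at epoch 63.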
SMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.725767438505336
RMSE: 5.7206439706125165
MAPE: 4.798603095387009
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 143.9522591181831
RMSE: 11.998010631691534
MAPE: 10.07848404711658
WMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 24.586224407987817
RMSE: 4.958449798877449
MAPE: 3.970226889097132
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 51.49% Accuracy
MSE: 207.2547601932076
RMSE: 14.3963453762824
MAPE: 12.894635987621164
KAMA
Prediction vs Close: 50.75% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 23.743754657069395
RMSE: 4.872756371610364
MAPE: 3.7850733762502107
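The SMA/EMA/WMA/DEMA/KAMA summaries above each report a directional accuracy plus MSE, RMSE, and MAPE. A minimal sketch of how such metrics are commonly computed is below; the function names are hypothetical, "Prediction vs Close" is read here as directional accuracy of predicted moves against actual closes, and the "Prediction vs Prediction" variant is not reconstructed.

```python
import numpy as np

def direction_accuracy(pred, actual):
    """Percent of steps where the predicted move and the actual move
    share a sign (up/down agreement)."""
    pred_dir = np.sign(np.diff(pred))
    true_dir = np.sign(np.diff(actual))
    return 100.0 * np.mean(pred_dir == true_dir)

def regression_errors(pred, actual):
    """MSE, RMSE, and MAPE (in percent) of predictions vs actuals."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    err = actual - pred
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / actual))
    return mse, rmse, mape

actual = np.array([100.0, 102.0, 101.0, 103.0])
pred = np.array([101.0, 101.5, 102.0, 102.5])
mse, rmse, mape = regression_errors(pred, actual)
```

Note that a low MAPE and a near-50% directional accuracy can coexist, as in the tables above: small point errors say little about getting the sign of the next move right.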
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
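Per the TA-Lib help text above, MIDPOINT is simply the mean of the highest and lowest price over `timeperiod`. A numpy sketch of that computation (not TA-Lib itself, which would be called as `talib.MIDPOINT(price, timeperiod=14)`):

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """(highest + lowest) / 2 over a rolling window. The first
    timeperiod - 1 outputs are left undefined (NaN), mirroring
    TA-Lib's lookback behavior."""
    price = np.asarray(price, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = (window.max() + window.min()) / 2.0
    return out

vals = midpoint([1, 3, 2, 5, 4], timeperiod=3)
```

Because it tracks the window's range rather than its mean, MIDPOINT smooths differently from the moving averages above, which is part of why the hybrid model's volatility balance shifts from indicator to indicator.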
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.26 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.24 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.84 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.23 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.154 seconds
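The stepwise trace above fits candidate (p,d,q) orders and keeps the one minimizing AIC. As a simplified illustration of that selection principle, the sketch below fits AR(p) models by ordinary least squares and scores them with the Gaussian least-squares form of AIC, n·ln(RSS/n) + 2k. This is a stand-in for pmdarima's full maximum-likelihood ARIMA fits, not a reimplementation of them.

```python
import numpy as np

def ar_aic(series, p):
    """Fit AR(p) by least squares and return the Gaussian-approximation
    AIC = n * ln(RSS / n) + 2k."""
    y = series[p:]
    if p == 0:
        resid = y - y.mean()
        k = 1
    else:
        # column j holds the lag-(j+1) values aligned with y
        X = np.column_stack(
            [series[p - j - 1 : len(series) - j - 1] for j in range(p)]
        )
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        k = p
    rss = float(resid @ resid)
    n = len(y)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.zeros(300)          # synthetic AR(2) series for illustration
for t in range(2, 300):
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

aics = {p: ar_aic(x, p) for p in range(4)}
best_p = min(aics, key=aics.get)   # lowest AIC wins, as in the trace
```

The 2k penalty is why the intercept model at the end of the trace (one extra parameter, AIC 3389.758) loses to the plain ARIMA(3,3,0) at 3387.759 despite fitting no worse.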
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1689.879
Date: Sun, 12 Dec 2021 AIC 3387.759
Time: 15:05:27 BIC 3406.522
Sample: 0 HQIC 3394.964
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1878 0.003 -345.315 0.000 -1.195 -1.181
ar.L2 -0.8876 0.007 -121.809 0.000 -0.902 -0.873
ar.L3 -0.3957 0.007 -60.127 0.000 -0.409 -0.383
sigma2 3.8904 0.020 193.404 0.000 3.851 3.930
===================================================================================
Ljung-Box (L1) (Q): 13.21 Jarque-Bera (JB): 1659080.01
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.08 Skew: 3.28
Prob(H) (two-sided): 0.00 Kurtosis: 225.31
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
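The Jarque-Bera row in the summary above (skew 3.28, kurtosis 225.31, Prob(JB) 0.00) shows residuals that are extremely far from mesokurtic, which is exactly the distributional issue the opening remark about shorter MA periods is aimed at. A small sketch of those diagnostics, using the standard JB formula n/6 · (S² + (K − 3)²/4):

```python
import numpy as np

def jarque_bera(resid):
    """Sample skewness S, kurtosis K, and the Jarque-Bera statistic
    JB = n/6 * (S**2 + (K - 3)**2 / 4). A mesokurtic (normal-like)
    series has K near 3 and JB near 0."""
    r = np.asarray(resid, dtype=float)
    r = r - r.mean()
    n = len(r)
    m2 = np.mean(r ** 2)
    skew = np.mean(r ** 3) / m2 ** 1.5
    kurt = np.mean(r ** 4) / m2 ** 2
    jb = n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
    return skew, kurt, jb

rng = np.random.default_rng(1)
s_n, k_n, jb_n = jarque_bera(rng.normal(size=5000))         # near-normal
s_t, k_t, jb_t = jarque_bera(rng.standard_t(3, size=5000))  # heavy-tailed
```

A kurtosis of 225 on the ARIMA residuals means the variance is dominated by a handful of extreme spikes, the part of the series the LSTM stage is expected to absorb.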
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.09371, saving model to LSTM3.h5
58/58 - 3s - loss: 0.1380 - mse: 0.1380 - mae: 0.2774 - val_loss: 0.0937 - val_mse: 0.0937 - val_mae: 0.2769 - lr: 0.0010 - 3s/epoch - 48ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.09371 to 0.04685, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0497 - mse: 0.0497 - mae: 0.1767 - val_loss: 0.0468 - val_mse: 0.0468 - val_mae: 0.1731 - lr: 0.0010 - 303ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.04685 to 0.03945, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0222 - mse: 0.0222 - mae: 0.1174 - val_loss: 0.0395 - val_mse: 0.0395 - val_mae: 0.1598 - lr: 0.0010 - 277ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.03945
58/58 - 0s - loss: 0.0142 - mse: 0.0142 - mae: 0.0941 - val_loss: 0.0437 - val_mse: 0.0437 - val_mae: 0.1682 - lr: 0.0010 - 269ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.03945
58/58 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0960 - val_loss: 0.0416 - val_mse: 0.0416 - val_mae: 0.1632 - lr: 0.0010 - 241ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.03945
58/58 - 0s - loss: 0.0115 - mse: 0.0115 - mae: 0.0853 - val_loss: 0.0458 - val_mse: 0.0458 - val_mae: 0.1734 - lr: 0.0010 - 275ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.03945
58/58 - 0s - loss: 0.0107 - mse: 0.0107 - mae: 0.0824 - val_loss: 0.0433 - val_mse: 0.0433 - val_mae: 0.1664 - lr: 0.0010 - 253ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00008: val_loss did not improve from 0.03945
58/58 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0704 - val_loss: 0.0470 - val_mse: 0.0470 - val_mae: 0.1766 - lr: 0.0010 - 252ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.03945 to 0.03681, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0829 - val_loss: 0.0368 - val_mse: 0.0368 - val_mae: 0.1519 - lr: 1.0000e-04 - 287ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.03681 to 0.03586, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0717 - val_loss: 0.0359 - val_mse: 0.0359 - val_mae: 0.1489 - lr: 1.0000e-04 - 332ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.03586 to 0.03285, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0682 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1408 - lr: 1.0000e-04 - 299ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.03285
58/58 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0693 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1419 - lr: 1.0000e-04 - 243ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.03285 to 0.03201, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0667 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1381 - lr: 1.0000e-04 - 273ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.03201 to 0.03175, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0645 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1373 - lr: 1.0000e-04 - 254ms/epoch - 4ms/step
[Epochs 15-61 elided: val_loss did not improve from 0.03175; ReduceLROnPlateau dropped the learning rate to 1e-05 at epoch 19 (and floored it there at epoch 24), while training loss settled near 0.005.]
Epoch 62/500
Epoch 00062: val_loss improved from 0.03175 to 0.03167, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0558 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1375 - lr: 1.0000e-05 - 441ms/epoch - 8ms/step
[Epochs 63-84 elided: val_loss did not improve from 0.03167.]
Epoch 85/500
Epoch 00085: val_loss improved from 0.03167 to 0.03129, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0519 - val_loss: 0.0313 - val_mse: 0.0313 - val_mae: 0.1369 - lr: 1.0000e-05 - 428ms/epoch - 7ms/step
[Epochs 86-90 elided: val_loss did not improve from 0.03129.]
Epoch 91/500
Epoch 00091: val_loss improved from 0.03129 to 0.03083, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0522 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1357 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 92/500
Epoch 00092: val_loss improved from 0.03083 to 0.03053, saving model to LSTM3.h5
58/58 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0515 - val_loss: 0.0305 - val_mse: 0.0305 - val_mae: 0.1349 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
[Epochs 93-105 elided: val_loss did not improve from 0.03053.]
Epoch 106/500
Epoch 00106: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0523 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1377 - lr: 1.0000e-05 - 307ms/epoch - 5ms/step
Epoch 107/500
Epoch 00107: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0528 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1383 - lr: 1.0000e-05 - 247ms/epoch - 4ms/step
Epoch 108/500
Epoch 00108: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0499 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1377 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 109/500
Epoch 00109: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0498 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1383 - lr: 1.0000e-05 - 301ms/epoch - 5ms/step
Epoch 110/500
Epoch 00110: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0471 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1383 - lr: 1.0000e-05 - 248ms/epoch - 4ms/step
Epoch 111/500
Epoch 00111: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0511 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1383 - lr: 1.0000e-05 - 247ms/epoch - 4ms/step
Epoch 112/500
Epoch 00112: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0497 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1382 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 113/500
Epoch 00113: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0475 - val_loss: 0.0320 - val_mse: 0.0320 - val_mae: 0.1388 - lr: 1.0000e-05 - 341ms/epoch - 6ms/step
Epoch 114/500
Epoch 00114: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0509 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1371 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 115/500
Epoch 00115: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0478 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1386 - lr: 1.0000e-05 - 260ms/epoch - 4ms/step
Epoch 116/500
Epoch 00116: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0486 - val_loss: 0.0325 - val_mse: 0.0325 - val_mae: 0.1403 - lr: 1.0000e-05 - 297ms/epoch - 5ms/step
Epoch 117/500
Epoch 00117: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0497 - val_loss: 0.0332 - val_mse: 0.0332 - val_mae: 0.1420 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 118/500
Epoch 00118: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0047 - mse: 0.0047 - mae: 0.0528 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1425 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 119/500
Epoch 00119: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0479 - val_loss: 0.0331 - val_mse: 0.0331 - val_mae: 0.1416 - lr: 1.0000e-05 - 276ms/epoch - 5ms/step
Epoch 120/500
Epoch 00120: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0493 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1404 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 121/500
Epoch 00121: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0506 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1409 - lr: 1.0000e-05 - 252ms/epoch - 4ms/step
Epoch 122/500
Epoch 00122: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0514 - val_loss: 0.0331 - val_mse: 0.0331 - val_mae: 0.1418 - lr: 1.0000e-05 - 264ms/epoch - 5ms/step
Epoch 123/500
Epoch 00123: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0491 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1426 - lr: 1.0000e-05 - 298ms/epoch - 5ms/step
Epoch 124/500
Epoch 00124: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0496 - val_loss: 0.0332 - val_mse: 0.0332 - val_mae: 0.1419 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 125/500
Epoch 00125: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0491 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1424 - lr: 1.0000e-05 - 252ms/epoch - 4ms/step
Epoch 126/500
Epoch 00126: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0501 - val_loss: 0.0332 - val_mse: 0.0332 - val_mae: 0.1420 - lr: 1.0000e-05 - 305ms/epoch - 5ms/step
Epoch 127/500
Epoch 00127: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0504 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1425 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 128/500
Epoch 00128: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0495 - val_loss: 0.0325 - val_mse: 0.0325 - val_mae: 0.1401 - lr: 1.0000e-05 - 253ms/epoch - 4ms/step
Epoch 129/500
Epoch 00129: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0502 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1405 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 130/500
Epoch 00130: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0506 - val_loss: 0.0323 - val_mse: 0.0323 - val_mae: 0.1396 - lr: 1.0000e-05 - 311ms/epoch - 5ms/step
Epoch 131/500
Epoch 00131: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0496 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1398 - lr: 1.0000e-05 - 290ms/epoch - 5ms/step
Epoch 132/500
Epoch 00132: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0503 - val_loss: 0.0328 - val_mse: 0.0328 - val_mae: 0.1409 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 133/500
Epoch 00133: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0498 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1404 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 134/500
Epoch 00134: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0473 - val_loss: 0.0331 - val_mse: 0.0331 - val_mae: 0.1418 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 135/500
Epoch 00135: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0487 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1432 - lr: 1.0000e-05 - 252ms/epoch - 4ms/step
Epoch 136/500
Epoch 00136: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0496 - val_loss: 0.0342 - val_mse: 0.0342 - val_mae: 0.1446 - lr: 1.0000e-05 - 241ms/epoch - 4ms/step
Epoch 137/500
Epoch 00137: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0501 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1438 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 138/500
Epoch 00138: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0481 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1437 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 139/500
Epoch 00139: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0506 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1441 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 140/500
Epoch 00140: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0477 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1427 - lr: 1.0000e-05 - 259ms/epoch - 4ms/step
Epoch 141/500
Epoch 00141: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0483 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1424 - lr: 1.0000e-05 - 304ms/epoch - 5ms/step
Epoch 142/500
Epoch 00142: val_loss did not improve from 0.03053
58/58 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0485 - val_loss: 0.0333 - val_mse: 0.0333 - val_mae: 0.1422 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 00142: early stopping
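The checkpoint, learning-rate, and early-stopping messages interleaved through the log above come from standard Keras callbacks (ModelCheckpoint, ReduceLROnPlateau, EarlyStopping). A plain-Python sketch of that control flow; the patience values are guesses read off the log, not the notebook's actual settings:

```python
def train_loop(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
               lr_patience=4, stop_patience=8):
    """Mimic ModelCheckpoint / ReduceLROnPlateau / EarlyStopping on val_loss.

    Patience values here are illustrative guesses, not the notebook's settings.
    """
    best = float("inf")
    since_best = 0
    events = []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:                      # "val_loss improved ... saving model"
            best, since_best = vl, 0
            events.append((epoch, "checkpoint"))
        else:                              # "val_loss did not improve"
            since_best += 1
            if since_best % lr_patience == 0 and lr > min_lr:
                lr = max(lr * factor, min_lr)   # "ReduceLROnPlateau reducing ..."
                events.append((epoch, "reduce lr"))
            if since_best >= stop_patience:
                events.append((epoch, "early stopping"))
                break
    return best, lr, events
```

Running it on a loss curve that improves twice and then plateaus reproduces the log's pattern of a checkpoint save, staged LR cuts down to 1e-05, and a final early stop.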
SMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.725767438505336
RMSE: 5.7206439706125165
MAPE: 4.798603095387009
EMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 45.9% Accuracy
MSE: 143.9522591181831
RMSE: 11.998010631691534
MAPE: 10.07848404711658
WMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.76% Accuracy
MSE: 24.586224407987817
RMSE: 4.958449798877449
MAPE: 3.970226889097132
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 51.49% Accuracy
MSE: 207.2547601932076
RMSE: 14.3963453762824
MAPE: 12.894635987621164
KAMA
Prediction vs Close: 50.75% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 23.743754657069395
RMSE: 4.872756371610364
MAPE: 3.7850733762502107
MIDPOINT
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 35.62442093531873
RMSE: 5.968619684258559
MAPE: 5.0490603478808165
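Each summary block above reports MSE, RMSE, MAPE, and two directional-accuracy figures. The error metrics, and the "Prediction vs Close" accuracy as I read it (the fraction of steps where the predicted move matches the realized move), can be reproduced with a short NumPy sketch. The array names and the directional definition are assumptions; the "Prediction vs Prediction" figure is not reconstructed here because its exact definition is not shown:

```python
import numpy as np

def report(pred, close):
    """MSE, RMSE, MAPE (in %), and directional accuracy of pred vs. close."""
    err = pred - close
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / close)) * 100
    # Directional accuracy: does the predicted move match the realized move?
    acc = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100
    return mse, rmse, mape, acc

# Tiny hypothetical series, just to exercise the function.
close = np.array([100.0, 101.0, 99.5, 102.0])
pred = np.array([100.5, 100.0, 100.5, 103.0])
mse, rmse, mape, acc = report(pred, close)
```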
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
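The TA-Lib help text above specifies T3 by a time period and a volume factor v. Tillson's T3 is the "generalized DEMA" smoother GD(x) = (1 + v)·EMA(x) − v·EMA(EMA(x)) applied three times; a pure-NumPy sketch of that recursion follows. This is an illustrative reimplementation, not TA-Lib's code — in particular, TA-Lib's T3 has a lookback/unstable period that this sketch ignores:

```python
import numpy as np

def ema(x, period):
    """Exponential moving average with the usual alpha = 2 / (period + 1)."""
    alpha = 2.0 / (period + 1)
    out = np.empty(len(x))
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

def gd(x, period, v):
    """Generalized DEMA: over-weight the EMA, subtract v times its re-smoothing."""
    e = ema(x, period)
    return (1 + v) * e - v * ema(e, period)

def t3(x, period=5, vfactor=0.7):
    """Tillson T3: the GD smoother applied three times."""
    return gd(gd(gd(np.asarray(x, dtype=float), period, vfactor),
                 period, vfactor), period, vfactor)
```

On a constant series every EMA (and hence GD and T3) reproduces the constant, which makes a convenient sanity check.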
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.38 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.42 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.58 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.18 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.155 seconds
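The stepwise search above compares candidate orders by AIC, where AIC = 2k − 2·logL penalizes extra parameters. The idea can be shown on a toy problem: fit AR(p) by least squares on a synthetic series for several p and keep the order with the lowest Gaussian AIC. This is a simplified grid search over the AR order only, not pmdarima's actual stepwise algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(2) series standing in for the (differenced) price series.
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ar_aic(y, p):
    """Least-squares AR(p) fit; return the Gaussian AIC of the residuals."""
    Y = y[p:]
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    sigma2 = resid @ resid / len(Y)
    loglik = -0.5 * len(Y) * (np.log(2 * np.pi * sigma2) + 1)
    k = p + 1                      # AR coefficients plus the noise variance
    return 2 * k - 2 * loglik

aics = {p: ar_aic(y, p) for p in range(1, 6)}
best_p = min(aics, key=aics.get)   # AIC-minimizing order
```

Underfitting (p = 1) is penalized by a much worse likelihood, while overfitting (p > 2) is penalized by the 2k term, so the minimum lands at or near the true order.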
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1784.736
Date: Sun, 12 Dec 2021 AIC 3577.471
Time: 15:07:20 BIC 3596.235
Sample: 0 HQIC 3584.677
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.844 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.861 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.862 0.000 -0.410 -0.387
sigma2 4.9242 0.023 215.469 0.000 4.879 4.969
===================================================================================
Ljung-Box (L1) (Q): 14.55 Jarque-Bera (JB): 2468024.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
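The information criteria in the SARIMAX table can be sanity-checked from the reported log-likelihood: with k = 4 estimated parameters (ar.L1–ar.L3 plus sigma2) and an effective sample of 808 − 3 = 805 observations after third-order differencing, AIC = 2k − 2·logL, BIC = k·ln(n) − 2·logL, and HQIC = 2k·ln(ln(n)) − 2·logL reproduce the tabled values to rounding:

```python
import math

loglik = -1784.736    # Log Likelihood from the SARIMAX table above
k = 4                 # ar.L1, ar.L2, ar.L3, sigma2
n = 808 - 3           # observations left after d = 3 differencing

aic = 2 * k - 2 * loglik                       # ≈ 3577.471
bic = k * math.log(n) - 2 * loglik             # ≈ 3596.235
hqic = 2 * k * math.log(math.log(n)) - 2 * loglik   # ≈ 3584.677
```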
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.11354, saving model to LSTM3.h5
43/43 - 2s - loss: 0.5101 - mse: 0.5101 - mae: 0.5223 - val_loss: 0.1135 - val_mse: 0.1135 - val_mae: 0.3053 - lr: 0.0010 - 2s/epoch - 53ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.11354 to 0.05364, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0361 - mse: 0.0361 - mae: 0.1535 - val_loss: 0.0536 - val_mse: 0.0536 - val_mae: 0.2042 - lr: 0.0010 - 214ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.05364 to 0.04216, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0328 - mse: 0.0328 - mae: 0.1459 - val_loss: 0.0422 - val_mse: 0.0422 - val_mae: 0.1788 - lr: 0.0010 - 236ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.04216 to 0.03063, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0282 - mse: 0.0282 - mae: 0.1364 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1492 - lr: 0.0010 - 244ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.03063 to 0.02455, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0227 - mse: 0.0227 - mae: 0.1199 - val_loss: 0.0246 - val_mse: 0.0246 - val_mae: 0.1304 - lr: 0.0010 - 224ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.02455
43/43 - 0s - loss: 0.0192 - mse: 0.0192 - mae: 0.1128 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1385 - lr: 0.0010 - 181ms/epoch - 4ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.02455
43/43 - 0s - loss: 0.0182 - mse: 0.0182 - mae: 0.1073 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1483 - lr: 0.0010 - 212ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.02455
43/43 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1051 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1389 - lr: 0.0010 - 211ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.02455 to 0.01847, saving model to LSTM3.h5
43/43 - 0s - loss: 0.0149 - mse: 0.0149 - mae: 0.0974 - val_loss: 0.0185 - val_mse: 0.0185 - val_mae: 0.1060 - lr: 0.0010 - 221ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0908 - val_loss: 0.0191 - val_mse: 0.0191 - val_mae: 0.1078 - lr: 0.0010 - 212ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0856 - val_loss: 0.0205 - val_mse: 0.0205 - val_mae: 0.1123 - lr: 0.0010 - 179ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0105 - mse: 0.0105 - mae: 0.0826 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1476 - lr: 0.0010 - 235ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0093 - mse: 0.0093 - mae: 0.0758 - val_loss: 0.0247 - val_mse: 0.0247 - val_mae: 0.1261 - lr: 0.0010 - 186ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00014: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0088 - mse: 0.0088 - mae: 0.0740 - val_loss: 0.0260 - val_mse: 0.0260 - val_mae: 0.1306 - lr: 0.0010 - 183ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0681 - val_loss: 0.0262 - val_mse: 0.0262 - val_mae: 0.1319 - lr: 1.0000e-04 - 197ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0692 - val_loss: 0.0258 - val_mse: 0.0258 - val_mae: 0.1309 - lr: 1.0000e-04 - 181ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0661 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1338 - lr: 1.0000e-04 - 235ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0650 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1384 - lr: 1.0000e-04 - 224ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00019: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0653 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1396 - lr: 1.0000e-04 - 202ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0643 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1399 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0679 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1400 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0610 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1403 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0645 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1407 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00024: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0647 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1408 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0632 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1408 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0639 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1403 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0662 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1405 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0663 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1402 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0652 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1406 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0631 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1412 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0633 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1414 - lr: 1.0000e-05 - 192ms/epoch - 4ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0630 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1416 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0610 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1418 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0646 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1418 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0627 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1413 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0644 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1412 - lr: 1.0000e-05 - 206ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0654 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1405 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0636 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1401 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0614 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1400 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0642 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1404 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0655 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1403 - lr: 1.0000e-05 - 202ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0680 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1398 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0646 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1400 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0652 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1400 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0652 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1397 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0653 - val_loss: 0.0277 - val_mse: 0.0277 - val_mae: 0.1383 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0609 - val_loss: 0.0277 - val_mse: 0.0277 - val_mae: 0.1383 - lr: 1.0000e-05 - 237ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0622 - val_loss: 0.0277 - val_mse: 0.0277 - val_mae: 0.1383 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0618 - val_loss: 0.0277 - val_mse: 0.0277 - val_mae: 0.1383 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0638 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1393 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0654 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1398 - lr: 1.0000e-05 - 198ms/epoch - 5ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0597 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1406 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0619 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1408 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0623 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1415 - lr: 1.0000e-05 - 195ms/epoch - 5ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0629 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1415 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0626 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1404 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0606 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1402 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0633 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1395 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01847
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0613 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1399 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 00059: early stopping
T3
Prediction vs Close: 56.34% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 103.73535640918065
RMSE: 10.185055542763655
MAPE: 8.016244139827235
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.47 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.26 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.17 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.77 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.17 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.053 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1783.123
Date: Sun, 12 Dec 2021 AIC 3574.245
Time: 15:08:46 BIC 3593.008
Sample: 0 HQIC 3581.451
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1480 0.004 -302.430 0.000 -1.155 -1.141
ar.L2 -0.8300 0.008 -99.682 0.000 -0.846 -0.814
ar.L3 -0.3687 0.007 -50.527 0.000 -0.383 -0.354
sigma2 4.9055 0.028 175.970 0.000 4.851 4.960
===================================================================================
Ljung-Box (L1) (Q): 11.61 Jarque-Bera (JB): 1261976.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.16 Skew: 2.52
Prob(H) (two-sided): 0.00 Kurtosis: 196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
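The residual diagnostics above are worth pausing on: a Jarque-Bera statistic over a million, with Kurtosis ≈ 197, means the ARIMA residuals handed to the LSTM are extremely leptokurtic, consistent with the volatility-balance concern discussed earlier in the notebook. The statistic itself follows from the reported skew and kurtosis via JB = n/6 · (S² + (K − 3)²/4):

```python
n = 808 - 3     # effective residuals after d = 3 differencing
S = 2.52        # Skew from the table above
K = 196.90      # Kurtosis from the table above

jb = n / 6 * (S ** 2 + (K - 3) ** 2 / 4)   # matches JB ≈ 1.26e6 to rounding
```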
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.10461, saving model to LSTM3.h5
90/90 - 2s - loss: 0.0205 - mse: 0.0205 - mae: 0.1144 - val_loss: 0.1046 - val_mse: 0.1046 - val_mae: 0.2573 - lr: 0.0010 - 2s/epoch - 27ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.10461
90/90 - 0s - loss: 0.0231 - mse: 0.0231 - mae: 0.1277 - val_loss: 0.1193 - val_mse: 0.1193 - val_mae: 0.2715 - lr: 0.0010 - 362ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.10461 to 0.07773, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0539 - mse: 0.0539 - mae: 0.1890 - val_loss: 0.0777 - val_mse: 0.0777 - val_mae: 0.2154 - lr: 0.0010 - 378ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.07773 to 0.04901, saving model to LSTM3.h5
90/90 - 0s - loss: 0.0338 - mse: 0.0338 - mae: 0.1354 - val_loss: 0.0490 - val_mse: 0.0490 - val_mae: 0.1713 - lr: 0.0010 - 382ms/epoch - 4ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04901
90/90 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0678 - val_loss: 0.0503 - val_mse: 0.0503 - val_mae: 0.1756 - lr: 0.0010 - 364ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.04901
90/90 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0622 - val_loss: 0.0663 - val_mse: 0.0663 - val_mae: 0.2078 - lr: 0.0010 - 376ms/epoch - 4ms/step
[... epochs 7-53 omitted: val_loss never improved on 0.04901; ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 9 and to 1.0e-05 at epoch 14 ...]
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.04901
90/90 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0515 - val_loss: 0.0698 - val_mse: 0.0698 - val_mae: 0.2147 - lr: 1.0000e-05 - 363ms/epoch - 4ms/step
Epoch 00054: early stopping
MA        Pred vs Close  Pred vs Pred  MSE        RMSE     MAE
SMA       51.49%         48.13%        32.726     5.721    4.799
EMA       54.85%         45.90%        143.952    11.998   10.078
WMA       53.36%         47.76%        24.586     4.958    3.970
DEMA      51.87%         51.49%        207.255    14.396   12.895
KAMA      50.75%         50.00%        23.744     4.873    3.785
MIDPOINT  50.00%         47.01%        35.624     5.969    5.049
T3        56.34%         47.39%        103.735    10.185   8.016
TEMA      51.12%         47.76%        39.895     6.316    5.482
Runtime: 12.73 mins
from google.colab import files
import cv2
uploaded = files.upload()
img = cv2.imread('Experiment3.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fa5c4399110>
for i in range(len(list(simulation3.keys()))):
SIM = list(simulation3.keys())[i]
plot_train(simulation3,SIM)
plot_test(simulation3,SIM)
----- Train RMSE for SMA ----- 9.155996008701845 ----- Train_MSE_LSTM for SMA ----- 83.8322629113641 ----- Train MAE LSTM for SMA ----- 8.009066904332526
----- Test RMSE for SMA----- 5.7206439706125165 ----- Test_MSE_LSTM for SMA----- 32.725767438505336 ----- Test_MAE_LSTM for SMA----- 4.798603095387009
----- Train RMSE for EMA ----- 10.577937809273275 ----- Train_MSE_LSTM for EMA ----- 111.89276829685309 ----- Train MAE LSTM for EMA ----- 9.432194934544697
----- Test RMSE for EMA----- 11.998010631691534 ----- Test_MSE_LSTM for EMA----- 143.9522591181831 ----- Test_MAE_LSTM for EMA----- 10.07848404711658
----- Train RMSE for WMA ----- 11.6464497844428 ----- Train_MSE_LSTM for WMA ----- 135.63979258154774 ----- Train MAE LSTM for WMA ----- 10.44059237611322
----- Test RMSE for WMA----- 4.958449798877449 ----- Test_MSE_LSTM for WMA----- 24.586224407987817 ----- Test_MAE_LSTM for WMA----- 3.970226889097132
----- Train RMSE for DEMA ----- 12.798003878729565 ----- Train_MSE_LSTM for DEMA ----- 163.78890327997698 ----- Train MAE LSTM for DEMA ----- 11.593951105370675
----- Test RMSE for DEMA----- 14.3963453762824 ----- Test_MSE_LSTM for DEMA----- 207.2547601932076 ----- Test_MAE_LSTM for DEMA----- 12.894635987621164
----- Train RMSE for KAMA ----- 10.917779337092348 ----- Train_MSE_LSTM for KAMA ----- 119.19790565344064 ----- Train MAE LSTM for KAMA ----- 9.91727143279415
----- Test RMSE for KAMA----- 4.872756371610364 ----- Test_MSE_LSTM for KAMA----- 23.743754657069395 ----- Test_MAE_LSTM for KAMA----- 3.7850733762502107
----- Train RMSE for MIDPOINT ----- 9.672965115616142 ----- Train_MSE_LSTM for MIDPOINT ----- 93.56625412792681 ----- Train MAE LSTM for MIDPOINT ----- 8.607510758185814
----- Test RMSE for MIDPOINT----- 5.968619684258559 ----- Test_MSE_LSTM for MIDPOINT----- 35.62442093531873 ----- Test_MAE_LSTM for MIDPOINT----- 5.0490603478808165
----- Train RMSE for T3 ----- 12.322513146591183 ----- Train_MSE_LSTM for T3 ----- 151.84433024791252 ----- Train MAE LSTM for T3 ----- 11.162838602510032
----- Test RMSE for T3----- 10.185055542763655 ----- Test_MSE_LSTM for T3----- 103.73535640918065 ----- Test_MAE_LSTM for T3----- 8.016244139827235
----- Train RMSE for TEMA ----- 7.464720413063123 ----- Train_MSE_LSTM for TEMA ----- 55.72205084520128 ----- Train MAE LSTM for TEMA ----- 5.129409694220032
----- Test RMSE for TEMA----- 6.316228397668807 ----- Test_MSE_LSTM for TEMA----- 39.894741171517865 ----- Test_MAE_LSTM for TEMA----- 5.481705479796751
From the above experiments it is evident that the higher-period moving averages produce loss plots showing underrepresented data and underfitting; hence only the MAs with smaller periods, such as T3 or TRIMA, are kept. Going forward, EMA, WMA, and DEMA will be ignored.
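The period-versus-volatility trade-off behind this choice can be sketched directly: the shorter the MA period, the closer the smooth component tracks price and the smaller the residual left for the LSTM. A minimal illustration with a plain rolling mean standing in for the TA-Lib indicators (the `split_volatility` helper and the simulated series are assumptions for illustration, not the notebook's data):

```python
import numpy as np
import pandas as pd

def split_volatility(close: pd.Series, period: int):
    """Split a price series into a smooth low-volatility component
    (a rolling mean, standing in for an MA indicator) and the
    high-volatility residual that the LSTM would model."""
    low_vol = close.rolling(period, min_periods=1).mean()
    high_vol = close - low_vol
    return low_vol, high_vol

# Simulated random-walk price series for illustration.
rng = np.random.default_rng(0)
close = pd.Series(100 + rng.normal(0, 1, 500).cumsum())

_, resid_short = split_volatility(close, 5)   # short period: small residual
_, resid_long = split_volatility(close, 60)   # long period: large residual
print(resid_short.std() < resid_long.std())
```

The shorter window leaves a smaller, better-behaved residual, which is the volatility balance between the ARIMA and LSTM components that the experiments are probing.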
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
# prepare train and test data
X_value = pd.DataFrame(data.iloc[:, :])
y_value = pd.DataFrame(data.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
# Get data and check shape
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset) # X has shape (samples, 3, 21): each 3 x 21 slice is 3 days' worth of features; yc holds the corresponding closing prices
# pdb.set_trace()
X_train, X_test, = split_train_test(X)
y_train, y_test, = split_train_test(y)
# yc_train, yc_test, = split_train_test(original_data)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
det = 20
input_dim = X_train.shape[1]#3
feature_size = X_train.shape[2]#24
output_dim = y_train.shape[1]#1
# # Option 1
# # Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64,activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')
# ## Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# # option 2
# model = Sequential()
# model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
# model.add(Dense(64))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mean_squared_error', metrics=['accuracy'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# # Option 3
# # define custom activation
# # reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 4
# Set up & fit LSTM RNN
model = Sequential()
model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
model.add(LSTM(units=int(lstm_len/2)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='adam')
# Common code
callbacks = [
EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('LSTM4.h5', verbose=1, save_best_only=True, save_weights_only=True)]
fname1 = img_file+'.png'
tensorflow.keras.utils.plot_model(
model, to_file=fname1, show_shapes=True, show_dtype=False,
show_layer_names=True, expand_nested=False, dpi=96,
layer_range=None, show_layer_activations=False
)
history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# plot loss
fname2 = img_file+'-'+ma
plt.title(img_file+'-'+ma+' Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='validation')
pyplot.legend()
pyplot.savefig(fname2+'.png',dpi='figure')
pyplot.show()
# Generate predictions
predictiontr = model.predict(X_train, verbose=0)
predictiontr = (y_scaler.inverse_transform(predictiontr)-det).tolist()
outputtr = []
for i in range(len(predictiontr)):
outputtr.extend(predictiontr[i])
predictiontr = outputtr
# Generate error data
## replace with yc , xtest generated by new multistep method
mse_tr = mean_squared_error(y_train, predictiontr)
rmse_tr = mse_tr ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
# Original_tr = pd.Series(yc_train)
Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
predictionte = model.predict(X_test, verbose=0)
predictionte =( y_scaler.inverse_transform(predictionte)-det).tolist()
outputte = []
for i in range(len(predictionte)):
outputte.extend(predictionte[i])
predictionte = outputte
# Generate error data
mse_te = mean_squared_error(y_test, predictionte)
rmse_te = mse_te ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
# Original_te = pd.Series(yc_test)
Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
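One detail worth flagging in `get_lstm`: predictions leave the network in the scaler's (-1, 1) range and must be inverse-transformed before price-level errors are computed (the fixed `det = 20` offset above is notebook-specific and omitted here). A self-contained sketch of the min-max round-trip that `MinMaxScaler` performs under the hood:

```python
import numpy as np

def minmax_scale(y: np.ndarray, lo: float = -1.0, hi: float = 1.0):
    """Scale y into [lo, hi], returning the parameters needed to invert."""
    mn, mx = float(y.min()), float(y.max())
    scale = (hi - lo) / (mx - mn)
    return (y - mn) * scale + lo, (mn, scale, lo)

def minmax_inverse(y_scaled: np.ndarray, params) -> np.ndarray:
    """Undo minmax_scale, recovering values in the original price units."""
    mn, scale, lo = params
    return (y_scaled - lo) / scale + mn

prices = np.array([10.0, 12.0, 15.0, 11.0])
scaled, params = minmax_scale(prices)       # values now lie in [-1, 1]
recovered = minmax_inverse(scaled, params)  # back to price units
print(np.allclose(recovered, prices))  # True
```

Computing MSE or MAE on the scaled values and on the recovered prices gives very different magnitudes, which is why the inverse transform has to happen before the error metrics.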
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation4 = {}
imgfile = 'Experiment4'
for ma in optimized_period:
print(ma)
print(functions[ma])
print ( int( optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod = int( optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima(low_vol,low_vol_data, train_len, test_len)
except Exception:
print('ARIMA error, skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation4[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation4_data.json', 'w') as fp:
json.dump(simulation4, fp)
for ma in simulation4.keys():
print('\n' + ma)
print('Prediction vs Close:\t\t' + str(round(100*simulation4[ma]['accuracy']['prediction vs close'], 2))
+ '% Accuracy')
print('Prediction vs Prediction:\t' + str(round(100*simulation4[ma]['accuracy']['prediction vs prediction'], 2))
+ '% Accuracy')
print('MSE:\t', simulation4[ma]['final']['mse'],
'\nRMSE:\t', simulation4[ma]['final']['rmse'],
'\nMAE:\t', simulation4[ma]['final']['mae'])#,
# '\nMAPE:\t', simulation[ma]['final']['mape'])
# else:
# break
elapsed = timeit.default_timer() - start_time
print('Runtime: mins:',elapsed/60)
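The recombination step inside the loop above reduces to adding the two component forecasts back together: the ARIMA forecast of the smooth low-volatility series plus the LSTM forecast of the high-volatility residual. A minimal numeric sketch (toy arrays, not the notebook's data):

```python
import numpy as np

# Toy stand-ins: an ARIMA forecast of the smooth (low-volatility) component
# and an LSTM forecast of the residual (high-volatility) component.
low_vol_pred = np.array([101.0, 102.5, 103.0])
high_vol_pred = np.array([0.8, -0.4, 0.2])

# The hybrid forecast is simply their elementwise sum, because the series
# was split as close = low_vol + high_vol in the first place.
final_prediction = low_vol_pred + high_vol_pred
print(final_prediction)  # [101.8 102.1 103.2]
```

This is why the decomposition step must be an exact additive split: any component lost in the split can never be recovered at recombination time.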
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.54 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4157.020, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3687.148, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.19 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3458.651, Time=0.08 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3322.133, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.76 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.82 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3324.133, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.749 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1657.067
Date: Sun, 12 Dec 2021 AIC 3322.133
Time: 15:18:11 BIC 3340.897
Sample: 0 HQIC 3329.339
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1966 0.003 -387.226 0.000 -1.203 -1.191
ar.L2 -0.8952 0.006 -138.692 0.000 -0.908 -0.883
ar.L3 -0.3968 0.006 -68.284 0.000 -0.408 -0.385
sigma2 3.5858 0.017 214.535 0.000 3.553 3.619
===================================================================================
Ljung-Box (L1) (Q): 14.47 Jarque-Bera (JB): 2428881.42
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 271.99
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
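The stepwise search logged above picks the order that minimizes AIC. As a rough, numpy-only sketch of AIC-based order selection (AR-only models fit by least squares, with AIC = n·ln(RSS/n) + 2k; a deliberate simplification for illustration, not pmdarima's actual stepwise algorithm):

```python
import numpy as np

def ar_aic(y: np.ndarray, p: int) -> float:
    """Fit AR(p) by ordinary least squares and return its AIC."""
    if p == 0:
        resid = y - y.mean()
        n, k = len(y), 1
    else:
        # Lag matrix: row for time t holds [y[t-1], ..., y[t-p], 1].
        cols = [y[p - i - 1:len(y) - i - 1] for i in range(p)]
        X = np.column_stack(cols + [np.ones(len(y) - p)])
        target = y[p:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        n, k = len(target), p + 1
    rss = float(resid @ resid)
    return n * np.log(rss / n) + 2 * k

# Simulate an AR(2) process; the AIC search should reject p = 0 and p = 1.
rng = np.random.default_rng(1)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

best_p = min(range(4), key=lambda p: ar_aic(y, p))
print(best_p)
```

pmdarima's search does the same kind of comparison over full (p, d, q) SARIMAX fits, stepping between neighbouring orders rather than exhaustively enumerating them.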
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.06157, saving model to LSTM4.h5
48/48 - 4s - loss: 1.3923 - val_loss: 0.0616 - lr: 0.0010 - 4s/epoch - 77ms/step
[... epochs 2-44 omitted: val_loss did not improve from 0.06157; ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 6 and to 1.0e-05 at epoch 11 ...]
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.06157
48/48 - 0s - loss: 0.7867 - val_loss: 0.1096 - lr: 1.0000e-05 - 279ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.06157
48/48 - 0s - loss: 0.7862 - val_loss: 0.1097 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.06157
48/48 - 0s - loss: 0.7856 - val_loss: 0.1098 - lr: 1.0000e-05 - 250ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.06157
48/48 - 0s - loss: 0.7850 - val_loss: 0.1099 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.06157
48/48 - 0s - loss: 0.7845 - val_loss: 0.1100 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.06157
48/48 - 0s - loss: 0.7839 - val_loss: 0.1101 - lr: 1.0000e-05 - 260ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.06157
48/48 - 0s - loss: 0.7834 - val_loss: 0.1102 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 19.776724587061057
RMSE: 4.447102943159856
MAPE: 3.587879520041786
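As a quick sanity check on the printed metrics, the reported RMSE should be exactly the square root of the reported MSE (the values below are copied from the SMA printout above):

```python
import math

mse = 19.776724587061057   # MSE as printed above
rmse = 4.447102943159856   # RMSE as printed above

# RMSE is by definition sqrt(MSE), so the two printed values must agree.
assert math.isclose(math.sqrt(mse), rmse)
```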
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
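The TA-Lib help text above only states the signature; for reference, a minimal pure-Python sketch of an exponential moving average with the standard smoothing factor `alpha = 2 / (timeperiod + 1)`. Note this seeds the recursion with the first price, which is a simplification (TA-Lib seeds its EMA with an SMA of the first window):

```python
def ema(prices, timeperiod=30):
    """Simplified recursive EMA: out[t] = alpha*price[t] + (1-alpha)*out[t-1]."""
    alpha = 2 / (timeperiod + 1)
    out = [prices[0]]                 # simplified seed (TA-Lib uses an SMA seed)
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

ema([0.0, 1.0], timeperiod=3)  # alpha = 0.5 -> [0.0, 0.5]
```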
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.42 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4231.556, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3761.238, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.28 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3532.227, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3394.496, Time=0.10 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.87 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.64 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3396.496, Time=0.22 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 2.687 seconds
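The stepwise search above ranks candidates by AIC = 2k − 2·log L. For the winning ARIMA(3,3,0), k = 4 estimated parameters (three AR coefficients plus sigma2) and the log-likelihood of −1693.248 reported in the SARIMAX summary reproduce the printed AIC of 3394.496:

```python
k = 4                 # ar.L1, ar.L2, ar.L3, sigma2
loglik = -1693.248    # Log Likelihood from the SARIMAX summary

aic = 2 * k - 2 * loglik   # 8 + 3386.496 = 3394.496
```

This also explains why the `intercept` variant scores exactly 2 points worse (3396.496): it adds one parameter without improving the likelihood.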
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1693.248
Date: Sun, 12 Dec 2021 AIC 3394.496
Time: 15:19:44 BIC 3413.260
Sample: 0 HQIC 3401.702
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.569 0.000 -1.204 -1.192
ar.L2 -0.8976 0.006 -139.811 0.000 -0.910 -0.885
ar.L3 -0.3984 0.006 -68.662 0.000 -0.410 -0.387
sigma2 3.9230 0.018 215.372 0.000 3.887 3.959
===================================================================================
Ljung-Box (L1) (Q): 14.54 Jarque-Bera (JB): 2462173.05
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.82
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
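The enormous Jarque-Bera statistic in the summary above reflects extremely heavy-tailed residuals (kurtosis 273.82 vs 3 for a normal). It can be roughly reproduced from the printed skew and kurtosis via JB = (n/6)·(S² + (K−3)²/4); the effective sample size n = 805 below is an assumption (808 observations minus d = 3 differences; statsmodels' exact count may differ):

```python
n = 805              # assumed effective sample size after triple differencing
skew = 3.90          # Skew from the summary
kurt = 273.82        # Kurtosis from the summary

# Jarque-Bera: (n/6) * (S^2 + (K-3)^2 / 4); lands near the reported 2,462,173.
jb = n / 6 * (skew**2 + (kurt - 3) ** 2 / 4)
```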
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05562, saving model to LSTM4.h5
16/16 - 3s - loss: 1.4917 - val_loss: 0.0556 - lr: 0.0010 - 3s/epoch - 218ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.05562 to 0.05481, saving model to LSTM4.h5
16/16 - 0s - loss: 1.3932 - val_loss: 0.0548 - lr: 0.0010 - 146ms/epoch - 9ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.05481 to 0.05412, saving model to LSTM4.h5
16/16 - 0s - loss: 1.2699 - val_loss: 0.0541 - lr: 0.0010 - 129ms/epoch - 8ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.05412 to 0.05405, saving model to LSTM4.h5
16/16 - 0s - loss: 1.1744 - val_loss: 0.0540 - lr: 0.0010 - 131ms/epoch - 8ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.1194 - val_loss: 0.0546 - lr: 0.0010 - 105ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0830 - val_loss: 0.0556 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0545 - val_loss: 0.0569 - lr: 0.0010 - 93ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00008: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0300 - val_loss: 0.0583 - lr: 0.0010 - 106ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0154 - val_loss: 0.0584 - lr: 1.0000e-04 - 92ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0133 - val_loss: 0.0586 - lr: 1.0000e-04 - 138ms/epoch - 9ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0112 - val_loss: 0.0587 - lr: 1.0000e-04 - 113ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0091 - val_loss: 0.0589 - lr: 1.0000e-04 - 109ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00013: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0071 - val_loss: 0.0591 - lr: 1.0000e-04 - 111ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0058 - val_loss: 0.0591 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0056 - val_loss: 0.0591 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0053 - val_loss: 0.0591 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0051 - val_loss: 0.0591 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00018: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0049 - val_loss: 0.0592 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.05405
16/16 - 0s - loss: 1.0047 - val_loss: 0.0592 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
[... epochs 20-53 elided: val_loss never improved on 0.05405; training loss crept down from 1.0045 to 0.9974 ...]
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.05405
16/16 - 0s - loss: 0.9972 - val_loss: 0.0599 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 00054: early stopping
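The learning-rate trace in these logs (1.0e-03 → 1.0e-04 → 1.0e-05, then clamped at the floor) is the classic ReduceLROnPlateau pattern. A minimal plain-Python sketch of that logic, with `factor=0.1`, `patience=4`, and `min_lr=1e-5` inferred from the reduction points in the output above (assumptions, not confirmed from the notebook's callback code):

```python
def schedule_lr(val_losses, lr=1e-3, factor=0.1, patience=4, min_lr=1e-5):
    """Reduce lr by `factor` after `patience` epochs without val_loss improvement."""
    best = float("inf")
    wait = 0
    history = []
    for v in val_losses:
        if v < best:
            best, wait = v, 0
        else:
            wait += 1
            if wait >= patience:          # plateau detected
                lr = max(lr * factor, min_lr)   # clamp at min_lr
                wait = 0
        history.append(lr)
    return history

# Improvement for 3 epochs, then a plateau: lr drops after 4 stale epochs.
lrs = schedule_lr([0.056, 0.055, 0.054, 0.055, 0.056, 0.057, 0.058, 0.059])
```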
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 19.776724587061057
RMSE: 4.447102943159856
MAPE: 3.587879520041786
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 48.88% Accuracy
MSE: 31.621751516368622
RMSE: 5.623322106759368
MAPE: 4.355106062590965
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
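As with the EMA, the TA-Lib help above gives only the signature. A hedged sketch of the weighted moving average it describes, using linearly increasing weights 1..n over the last `timeperiod` prices (the textbook WMA definition TA-Lib documents):

```python
def wma(prices, timeperiod=30):
    """Linear-weighted moving average: newest price gets the largest weight."""
    weights = list(range(1, timeperiod + 1))
    total = sum(weights)
    out = []
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append(sum(w * p for w, p in zip(weights, window)) / total)
    return out

wma([1.0, 2.0, 3.0, 4.0], timeperiod=3)  # -> [14/6, 20/6]
```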
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.43 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4264.089, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3793.930, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.24 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3564.923, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3427.258, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.30 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.46 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3429.258, Time=0.69 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.352 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1709.629
Date: Sun, 12 Dec 2021 AIC 3427.258
Time: 15:21:04 BIC 3446.021
Sample: 0 HQIC 3434.464
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1981 0.003 -389.386 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.699 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.737 0.000 -0.410 -0.387
sigma2 4.0860 0.019 215.311 0.000 4.049 4.123
===================================================================================
Ljung-Box (L1) (Q): 14.57 Jarque-Bera (JB): 2460901.70
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 273.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04594, saving model to LSTM4.h5
17/17 - 4s - loss: 1.3787 - val_loss: 0.0459 - lr: 0.0010 - 4s/epoch - 208ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.3437 - val_loss: 0.0469 - lr: 0.0010 - 93ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.3089 - val_loss: 0.0478 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.2753 - val_loss: 0.0487 - lr: 0.0010 - 107ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.2443 - val_loss: 0.0495 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.2162 - val_loss: 0.0506 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1992 - val_loss: 0.0507 - lr: 1.0000e-04 - 112ms/epoch - 7ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1968 - val_loss: 0.0508 - lr: 1.0000e-04 - 105ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1943 - val_loss: 0.0509 - lr: 1.0000e-04 - 107ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1919 - val_loss: 0.0511 - lr: 1.0000e-04 - 105ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1895 - val_loss: 0.0512 - lr: 1.0000e-04 - 104ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1880 - val_loss: 0.0512 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1878 - val_loss: 0.0512 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1875 - val_loss: 0.0512 - lr: 1.0000e-05 - 119ms/epoch - 7ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1873 - val_loss: 0.0513 - lr: 1.0000e-05 - 135ms/epoch - 8ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1870 - val_loss: 0.0513 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1868 - val_loss: 0.0513 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
[... epochs 18-50 elided: val_loss never improved on 0.04594; training loss crept down from 1.1866 to 1.1789 ...]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04594
17/17 - 0s - loss: 1.1787 - val_loss: 0.0519 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 19.776724587061057
RMSE: 4.447102943159856
MAPE: 3.587879520041786
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 48.88% Accuracy
MSE: 31.621751516368622
RMSE: 5.623322106759368
MAPE: 4.355106062590965
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 52.4753296205182
RMSE: 7.2439857551294375
MAPE: 5.852253139584933
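The "Prediction vs Close" figures above read like a directional-accuracy metric: the fraction of days on which the predicted move (up or down relative to the previous close) matches the realised move. This is our interpretation of the metric, not the notebook's exact code; a hypothetical sketch:

```python
def directional_accuracy(preds, closes):
    """Percent of days where predicted direction vs previous close matches reality.

    `preds[i]` is the model's prediction for day i; `closes` are actual closes.
    Hypothetical reconstruction of the 'Prediction vs Close' accuracy metric.
    """
    hits = 0
    for i in range(1, len(closes)):
        pred_up = preds[i] > closes[i - 1]
        real_up = closes[i] > closes[i - 1]
        hits += pred_up == real_up
    return 100 * hits / (len(closes) - 1)

directional_accuracy([1.0, 2.0, 1.0], [1.0, 2.0, 3.0])  # one hit, one miss -> 50.0
```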
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
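The double EMA in the help text above is defined as DEMA = 2·EMA(price) − EMA(EMA(price)), which reduces the lag of a plain EMA. A hedged sketch using a simplified recursive EMA (seeded with the first value rather than TA-Lib's SMA seed):

```python
def ema(prices, timeperiod=30):
    """Simplified recursive EMA with alpha = 2/(timeperiod+1)."""
    alpha = 2 / (timeperiod + 1)
    out = [prices[0]]                 # simplified seed
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def dema(prices, timeperiod=30):
    """DEMA = 2*EMA(price) - EMA(EMA(price))."""
    e1 = ema(prices, timeperiod)
    e2 = ema(e1, timeperiod)
    return [2 * a - b for a, b in zip(e1, e2)]
```

On a constant series both EMAs equal the constant, so the DEMA does too; on trending data the subtraction cancels much of the single EMA's lag.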
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.44 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4436.126, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3965.317, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3736.589, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3598.951, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=0.95 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.94 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3600.951, Time=0.18 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.101 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1795.475
Date: Sun, 12 Dec 2021 AIC 3598.951
Time: 15:22:26 BIC 3617.714
Sample: 0 HQIC 3606.157
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1983 0.003 -389.581 0.000 -1.204 -1.192
ar.L2 -0.8973 0.006 -139.732 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.649 0.000 -0.410 -0.387
sigma2 5.0573 0.023 215.292 0.000 5.011 5.103
===================================================================================
Ljung-Box (L1) (Q): 14.41 Jarque-Bera (JB): 2460553.80
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.89
Prob(H) (two-sided): 0.00 Kurtosis: 273.74
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04142, saving model to LSTM4.h5
10/10 - 4s - loss: 1.3181 - val_loss: 0.0414 - lr: 0.0010 - 4s/epoch - 374ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.04142 to 0.04094, saving model to LSTM4.h5
10/10 - 0s - loss: 1.2515 - val_loss: 0.0409 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.04094 to 0.04051, saving model to LSTM4.h5
10/10 - 0s - loss: 1.1982 - val_loss: 0.0405 - lr: 0.0010 - 84ms/epoch - 8ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.04051 to 0.04014, saving model to LSTM4.h5
10/10 - 0s - loss: 1.1531 - val_loss: 0.0401 - lr: 0.0010 - 81ms/epoch - 8ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.04014 to 0.03983, saving model to LSTM4.h5
10/10 - 0s - loss: 1.1133 - val_loss: 0.0398 - lr: 0.0010 - 98ms/epoch - 10ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.03983 to 0.03959, saving model to LSTM4.h5
10/10 - 0s - loss: 1.0771 - val_loss: 0.0396 - lr: 0.0010 - 85ms/epoch - 8ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.03959 to 0.03942, saving model to LSTM4.h5
10/10 - 0s - loss: 1.0439 - val_loss: 0.0394 - lr: 0.0010 - 80ms/epoch - 8ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.03942 to 0.03934, saving model to LSTM4.h5
10/10 - 0s - loss: 1.0138 - val_loss: 0.0393 - lr: 0.0010 - 84ms/epoch - 8ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9873 - val_loss: 0.0394 - lr: 0.0010 - 90ms/epoch - 9ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9643 - val_loss: 0.0395 - lr: 0.0010 - 106ms/epoch - 11ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9445 - val_loss: 0.0397 - lr: 0.0010 - 84ms/epoch - 8ms/step
Epoch 12/500
Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00012: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9272 - val_loss: 0.0400 - lr: 0.0010 - 84ms/epoch - 8ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9162 - val_loss: 0.0400 - lr: 1.0000e-04 - 79ms/epoch - 8ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9148 - val_loss: 0.0401 - lr: 1.0000e-04 - 77ms/epoch - 8ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9134 - val_loss: 0.0401 - lr: 1.0000e-04 - 88ms/epoch - 9ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9120 - val_loss: 0.0401 - lr: 1.0000e-04 - 79ms/epoch - 8ms/step
Epoch 17/500
Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00017: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9106 - val_loss: 0.0402 - lr: 1.0000e-04 - 71ms/epoch - 7ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9096 - val_loss: 0.0402 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9095 - val_loss: 0.0402 - lr: 1.0000e-05 - 64ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9093 - val_loss: 0.0402 - lr: 1.0000e-05 - 77ms/epoch - 8ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9092 - val_loss: 0.0402 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 22/500
Epoch 00022: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00022: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9091 - val_loss: 0.0402 - lr: 1.0000e-05 - 102ms/epoch - 10ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9089 - val_loss: 0.0402 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
[... epochs 24-50 elided: val_loss never improved on 0.03934; training loss crept down from 0.9088 to 0.9050 ...]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9049 - val_loss: 0.0403 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9047 - val_loss: 0.0403 - lr: 1.0000e-05 - 78ms/epoch - 8ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9046 - val_loss: 0.0404 - lr: 1.0000e-05 - 86ms/epoch - 9ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9044 - val_loss: 0.0404 - lr: 1.0000e-05 - 87ms/epoch - 9ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9043 - val_loss: 0.0404 - lr: 1.0000e-05 - 67ms/epoch - 7ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9041 - val_loss: 0.0404 - lr: 1.0000e-05 - 80ms/epoch - 8ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9040 - val_loss: 0.0404 - lr: 1.0000e-05 - 93ms/epoch - 9ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.03934
10/10 - 0s - loss: 0.9038 - val_loss: 0.0404 - lr: 1.0000e-05 - 75ms/epoch - 7ms/step
Epoch 00058: early stopping
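The learning-rate schedule visible in the log above (reductions announced at epochs 6, 11, and 16, then a clamp at 1e-05) is consistent with a plateau-based scheduler. As a minimal sketch, not Keras itself, the following re-implements that logic under assumed settings (`factor=0.1`, `patience=5`, `min_lr=1e-5` are inferred from the log, not confirmed by the source):

```python
# Minimal re-implementation of the plateau behavior seen in the log above.
# Assumed settings: factor=0.1, patience=5, min_lr=1e-5 (inferred, not confirmed).
def simulate_reduce_on_plateau(val_losses, lr=1e-3, factor=0.1,
                               patience=5, min_lr=1e-5):
    best, wait, history = float("inf"), 0, []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0          # improvement resets the counter
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)  # clamp at min_lr, as at epoch 16
                wait = 0
        history.append(lr)
    return history

# One early improvement followed by a long plateau reproduces the
# 1e-3 -> 1e-4 -> 1e-5 path, with the final reduction clamped at min_lr:
lrs = simulate_reduce_on_plateau([0.0531] + [0.06] * 15)
```

With these assumptions the reductions land after epochs 6, 11, and 16, matching the log; the separate early-stopping callback then halts training once validation loss has gone unimproved for its own (longer) patience window.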
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 19.776724587061057
RMSE: 4.447102943159856
MAPE: 3.587879520041786
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 48.88% Accuracy
MSE: 31.621751516368622
RMSE: 5.623322106759368
MAPE: 4.355106062590965
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 52.4753296205182
RMSE: 7.2439857551294375
MAPE: 5.852253139584933
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 146.44755629127866
RMSE: 12.10155181335347
MAPE: 10.943210296434415
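The per-indicator summaries above report MSE, RMSE, MAPE, and two directional-accuracy figures. The error metrics have standard definitions; "Prediction vs Close" accuracy is not defined in the source, so the sketch below uses one plausible reading (the fraction of days where the predicted move and the actual close-to-close move share the same sign):

```python
import math

# Standard regression error metrics, as reported in the summaries above.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    # Mean absolute percentage error, expressed in percent.
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical "Prediction vs Close" directional accuracy: the notebook does
# not define it, so this assumes a same-sign-of-move comparison.
def directional_accuracy(close, pred):
    hits = sum((pred[i] - close[i - 1]) * (close[i] - close[i - 1]) > 0
               for i in range(1, len(close)))
    return 100.0 * hits / (len(close) - 1)
```

Note that MAPE divides by the true value, so it is only meaningful here because prices are well away from zero.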
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
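The TA-Lib help text above only describes KAMA's interface. For reference, Kaufman's formula can be sketched in pure Python as below; this assumes the standard fast=2/slow=30 smoothing constants and seeds the average with the last warm-up price, which may differ slightly from TA-Lib's internal initialization:

```python
# Pure-Python sketch of Kaufman's Adaptive Moving Average (what TA-Lib's
# KAMA computes). Assumptions: standard fast=2/slow=30 constants, and the
# average is seeded with the last price of the warm-up window.
def kama(prices, timeperiod=30, fast=2, slow=30):
    fastest = 2.0 / (fast + 1)   # smoothing constant of the fastest EMA
    slowest = 2.0 / (slow + 1)   # smoothing constant of the slowest EMA
    out = [None] * timeperiod
    kama_prev = prices[timeperiod - 1]
    for i in range(timeperiod, len(prices)):
        change = abs(prices[i] - prices[i - timeperiod])
        volatility = sum(abs(prices[j] - prices[j - 1])
                         for j in range(i - timeperiod + 1, i + 1))
        er = change / volatility if volatility else 0.0  # efficiency ratio
        sc = (er * (fastest - slowest) + slowest) ** 2   # adaptive smoothing
        kama_prev = kama_prev + sc * (prices[i] - kama_prev)
        out.append(kama_prev)
    return out
```

The efficiency ratio is what makes KAMA "adaptive": in a clean trend (change ≈ total volatility) the smoothing constant approaches the fast EMA's, while in choppy markets it collapses toward the slow EMA's, which is why KAMA can tolerate noisier inputs than a plain SMA.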
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4190.464, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3724.371, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.29 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3494.154, Time=0.07 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3357.435, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.17 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.74 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3359.435, Time=0.21 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.041 seconds
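The stepwise search above is simply minimizing the Akaike information criterion, AIC = 2k − 2 ln L̂, over candidate (p, d, q) orders. The reported AIC of 3357.435 can be reproduced from the fit's log likelihood (−1674.717) and its four estimated parameters (ar.L1 through ar.L3 plus sigma2):

```python
# AIC = 2k - 2*ln(L): penalizes parameter count against goodness of fit.
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

# ARIMA(3,3,0) above reports Log Likelihood -1674.717 with four estimated
# parameters (three AR coefficients and sigma2):
print(round(aic(-1674.717, 4), 3))  # 3357.434, the reported AIC up to rounding

# The stepwise search then keeps whichever candidate scored lowest:
candidates = {(0, 3, 0): 4190.464, (1, 3, 0): 3724.371,
              (2, 3, 0): 3494.154, (3, 3, 0): 3357.435}
best_order = min(candidates, key=candidates.get)
print(best_order)  # (3, 3, 0)
```

Candidates whose likelihood optimization fails to converge are reported as `AIC=inf`, which is why several (p, 3, 1) models above drop out of contention.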
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1674.717
Date: Sun, 12 Dec 2021 AIC 3357.435
Time: 15:23:44 BIC 3376.198
Sample: 0 HQIC 3364.641
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1955 0.003 -381.246 0.000 -1.202 -1.189
ar.L2 -0.8964 0.007 -135.835 0.000 -0.909 -0.883
ar.L3 -0.3971 0.006 -67.229 0.000 -0.409 -0.385
sigma2 3.7466 0.018 211.623 0.000 3.712 3.781
===================================================================================
Ljung-Box (L1) (Q): 14.20 Jarque-Bera (JB): 2338363.32
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 3.76
Prob(H) (two-sided): 0.00 Kurtosis: 266.93
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
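The diagnostics block in the SARIMAX summary is worth reading: a Jarque-Bera statistic of 2338363.32 with kurtosis 266.93 marks an extremely heavy-tailed (strongly non-mesokurtic) residual distribution, which is exactly the volatility issue the surrounding discussion raises. The statistic follows directly from the sample skewness S and kurtosis K; plugging in the rounded values from the table lands within about half a percent of the reported figure:

```python
# Jarque-Bera normality statistic from sample skewness S and kurtosis K:
# JB = n/6 * (S^2 + (K - 3)^2 / 4). Large values reject normality.
def jarque_bera(n, skew, kurtosis):
    return n / 6.0 * (skew ** 2 + (kurtosis - 3.0) ** 2 / 4.0)

# Rounded values from the table above (n=808, skew 3.76, kurtosis 266.93)
# reproduce the reported JB of 2338363.32 to within rounding error:
print(jarque_bera(808, 3.76, 266.93))
```

The (K − 3)² term dominates here: almost all of the statistic comes from excess kurtosis rather than skew, i.e. from rare, very large residuals.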
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05310, saving model to LSTM4.h5
45/45 - 4s - loss: 1.2968 - val_loss: 0.0531 - lr: 0.0010 - 4s/epoch - 90ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05310
45/45 - 0s - loss: 1.0725 - val_loss: 0.0568 - lr: 0.0010 - 315ms/epoch - 7ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.9746 - val_loss: 0.0608 - lr: 0.0010 - 274ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.9066 - val_loss: 0.0645 - lr: 0.0010 - 248ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.8542 - val_loss: 0.0681 - lr: 0.0010 - 269ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.8135 - val_loss: 0.0717 - lr: 0.0010 - 257ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7928 - val_loss: 0.0721 - lr: 1.0000e-04 - 266ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7898 - val_loss: 0.0724 - lr: 1.0000e-04 - 289ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7868 - val_loss: 0.0728 - lr: 1.0000e-04 - 285ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7838 - val_loss: 0.0732 - lr: 1.0000e-04 - 306ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7808 - val_loss: 0.0736 - lr: 1.0000e-04 - 256ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7790 - val_loss: 0.0737 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7787 - val_loss: 0.0737 - lr: 1.0000e-05 - 262ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7784 - val_loss: 0.0738 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7781 - val_loss: 0.0738 - lr: 1.0000e-05 - 240ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7778 - val_loss: 0.0739 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7774 - val_loss: 0.0739 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7771 - val_loss: 0.0740 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7768 - val_loss: 0.0740 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7765 - val_loss: 0.0740 - lr: 1.0000e-05 - 255ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7761 - val_loss: 0.0741 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7758 - val_loss: 0.0742 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7755 - val_loss: 0.0742 - lr: 1.0000e-05 - 252ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7751 - val_loss: 0.0743 - lr: 1.0000e-05 - 286ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7748 - val_loss: 0.0743 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7744 - val_loss: 0.0744 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7741 - val_loss: 0.0744 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7737 - val_loss: 0.0745 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7734 - val_loss: 0.0745 - lr: 1.0000e-05 - 260ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7730 - val_loss: 0.0746 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7727 - val_loss: 0.0746 - lr: 1.0000e-05 - 280ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7723 - val_loss: 0.0747 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7720 - val_loss: 0.0748 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7716 - val_loss: 0.0748 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7713 - val_loss: 0.0749 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7709 - val_loss: 0.0749 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7706 - val_loss: 0.0750 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7702 - val_loss: 0.0751 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7698 - val_loss: 0.0751 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7695 - val_loss: 0.0752 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7691 - val_loss: 0.0753 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7688 - val_loss: 0.0753 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7684 - val_loss: 0.0754 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7681 - val_loss: 0.0755 - lr: 1.0000e-05 - 237ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7677 - val_loss: 0.0755 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7673 - val_loss: 0.0756 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7670 - val_loss: 0.0757 - lr: 1.0000e-05 - 239ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7666 - val_loss: 0.0757 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7663 - val_loss: 0.0758 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7659 - val_loss: 0.0759 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05310
45/45 - 0s - loss: 0.7655 - val_loss: 0.0759 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step
Epoch 00051: early stopping
KAMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 19.64215945229788
RMSE: 4.4319475913302355
MAPE: 3.5686191181651687
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
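MIDPOINT is the simplest overlap study in this batch: the mean of the highest and lowest price over the trailing window. A pure-Python sketch of what TA-Lib computes:

```python
# Pure-Python sketch of TA-Lib's MIDPOINT over a single price series:
# the mean of the highest and lowest value in the trailing window.
def midpoint(prices, timeperiod=14):
    out = [None] * (timeperiod - 1)  # warm-up region has no output
    for i in range(timeperiod - 1, len(prices)):
        window = prices[i - timeperiod + 1 : i + 1]
        out.append((max(window) + min(window)) / 2.0)
    return out
```

Unlike a mean-based average, MIDPOINT responds only to new extremes within the window, so it tends to move in steps rather than smoothly.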
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4212.289, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3747.746, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.23 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3523.401, Time=0.08 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3387.759, Time=0.12 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.30 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.88 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3389.758, Time=0.24 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.287 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1689.879
Date: Sun, 12 Dec 2021 AIC 3387.759
Time: 15:25:11 BIC 3406.522
Sample: 0 HQIC 3394.964
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1878 0.003 -345.315 0.000 -1.195 -1.181
ar.L2 -0.8876 0.007 -121.809 0.000 -0.902 -0.873
ar.L3 -0.3957 0.007 -60.127 0.000 -0.409 -0.383
sigma2 3.8904 0.020 193.404 0.000 3.851 3.930
===================================================================================
Ljung-Box (L1) (Q): 13.21 Jarque-Bera (JB): 1659080.01
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.08 Skew: 3.28
Prob(H) (two-sided): 0.00 Kurtosis: 225.31
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04095, saving model to LSTM4.h5
58/58 - 4s - loss: 1.2983 - val_loss: 0.0410 - lr: 0.0010 - 4s/epoch - 63ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04095
58/58 - 0s - loss: 1.1355 - val_loss: 0.0436 - lr: 0.0010 - 310ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.04095
58/58 - 0s - loss: 1.0175 - val_loss: 0.0468 - lr: 0.0010 - 329ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.9394 - val_loss: 0.0508 - lr: 0.0010 - 284ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8828 - val_loss: 0.0556 - lr: 0.0010 - 318ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8400 - val_loss: 0.0610 - lr: 0.0010 - 311ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8176 - val_loss: 0.0616 - lr: 1.0000e-04 - 310ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8144 - val_loss: 0.0622 - lr: 1.0000e-04 - 322ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8111 - val_loss: 0.0628 - lr: 1.0000e-04 - 332ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8078 - val_loss: 0.0634 - lr: 1.0000e-04 - 325ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8045 - val_loss: 0.0640 - lr: 1.0000e-04 - 309ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8025 - val_loss: 0.0641 - lr: 1.0000e-05 - 323ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8022 - val_loss: 0.0641 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8018 - val_loss: 0.0642 - lr: 1.0000e-05 - 312ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8015 - val_loss: 0.0643 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8011 - val_loss: 0.0644 - lr: 1.0000e-05 - 303ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8008 - val_loss: 0.0644 - lr: 1.0000e-05 - 314ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8004 - val_loss: 0.0645 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.8001 - val_loss: 0.0646 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7997 - val_loss: 0.0647 - lr: 1.0000e-05 - 282ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7994 - val_loss: 0.0648 - lr: 1.0000e-05 - 308ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7990 - val_loss: 0.0648 - lr: 1.0000e-05 - 318ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7987 - val_loss: 0.0649 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7983 - val_loss: 0.0650 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7979 - val_loss: 0.0651 - lr: 1.0000e-05 - 290ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7976 - val_loss: 0.0652 - lr: 1.0000e-05 - 342ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7972 - val_loss: 0.0653 - lr: 1.0000e-05 - 316ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7968 - val_loss: 0.0654 - lr: 1.0000e-05 - 292ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7965 - val_loss: 0.0655 - lr: 1.0000e-05 - 308ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7961 - val_loss: 0.0656 - lr: 1.0000e-05 - 300ms/epoch - 5ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7957 - val_loss: 0.0657 - lr: 1.0000e-05 - 310ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7953 - val_loss: 0.0658 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7950 - val_loss: 0.0659 - lr: 1.0000e-05 - 290ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7946 - val_loss: 0.0660 - lr: 1.0000e-05 - 333ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7942 - val_loss: 0.0661 - lr: 1.0000e-05 - 315ms/epoch - 5ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7938 - val_loss: 0.0662 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7935 - val_loss: 0.0663 - lr: 1.0000e-05 - 323ms/epoch - 6ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7931 - val_loss: 0.0664 - lr: 1.0000e-05 - 323ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7927 - val_loss: 0.0665 - lr: 1.0000e-05 - 332ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7923 - val_loss: 0.0666 - lr: 1.0000e-05 - 344ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7920 - val_loss: 0.0667 - lr: 1.0000e-05 - 293ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7916 - val_loss: 0.0668 - lr: 1.0000e-05 - 322ms/epoch - 6ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7912 - val_loss: 0.0669 - lr: 1.0000e-05 - 312ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7908 - val_loss: 0.0670 - lr: 1.0000e-05 - 287ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7905 - val_loss: 0.0671 - lr: 1.0000e-05 - 317ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7901 - val_loss: 0.0672 - lr: 1.0000e-05 - 326ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7897 - val_loss: 0.0674 - lr: 1.0000e-05 - 306ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7893 - val_loss: 0.0675 - lr: 1.0000e-05 - 323ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7890 - val_loss: 0.0676 - lr: 1.0000e-05 - 312ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7886 - val_loss: 0.0677 - lr: 1.0000e-05 - 329ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04095
58/58 - 0s - loss: 0.7882 - val_loss: 0.0678 - lr: 1.0000e-05 - 313ms/epoch - 5ms/step
Epoch 00051: early stopping
MIDPOINT
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 19.83404242536117
RMSE: 4.453542682557468
MAPE: 3.5743844299716057
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
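T3 is Tillson's triple-smoothed moving average: a "generalized DEMA" GD(x) = EMA(x)·(1 + v) − EMA(EMA(x))·v, applied three times with volume factor v. The sketch below captures that structure; note it seeds each EMA with the first value, whereas TA-Lib seeds with an SMA and discards a lookback region, so early values will differ from TA-Lib's output:

```python
# Hedged sketch of Tillson's T3 (the structure behind TA-Lib's T3).
# Assumption: EMAs are seeded with the first value; TA-Lib's warm-up differs.
def ema(values, timeperiod):
    alpha = 2.0 / (timeperiod + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def t3(prices, timeperiod=5, vfactor=0.7):
    def gd(values):  # generalized DEMA: overshoot the EMA to cut lag
        e1 = ema(values, timeperiod)
        e2 = ema(e1, timeperiod)
        return [(1 + vfactor) * a - vfactor * b for a, b in zip(e1, e2)]
    return gd(gd(gd(prices)))
```

The three nested GD passes are why T3 is so smooth, and also why it lags structural breaks; that heavier smoothing is consistent with T3 needing a longer effective lookback (timeperiod 19 above) than the raw 5-period default.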
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.35 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4414.515, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3944.062, Time=0.04 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.36 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3715.173, Time=0.06 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3577.471, Time=0.09 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.39 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.60 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3579.471, Time=0.19 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.115 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1784.736
Date: Sun, 12 Dec 2021 AIC 3577.471
Time: 15:26:44 BIC 3596.235
Sample: 0 HQIC 3584.677
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1982 0.003 -389.844 0.000 -1.204 -1.192
ar.L2 -0.8974 0.006 -139.861 0.000 -0.910 -0.885
ar.L3 -0.3983 0.006 -68.862 0.000 -0.410 -0.387
sigma2 4.9242 0.023 215.469 0.000 4.879 4.969
===================================================================================
Ljung-Box (L1) (Q): 14.55 Jarque-Bera (JB): 2468024.38
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 3.90
Prob(H) (two-sided): 0.00 Kurtosis: 274.15
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04951, saving model to LSTM4.h5
43/43 - 4s - loss: 1.4093 - val_loss: 0.0495 - lr: 0.0010 - 4s/epoch - 83ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.3586 - val_loss: 0.0516 - lr: 0.0010 - 248ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.3003 - val_loss: 0.0540 - lr: 0.0010 - 227ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.2446 - val_loss: 0.0567 - lr: 0.0010 - 247ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.1921 - val_loss: 0.0597 - lr: 0.0010 - 240ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.1433 - val_loss: 0.0629 - lr: 0.0010 - 226ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.1152 - val_loss: 0.0632 - lr: 1.0000e-04 - 238ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.1110 - val_loss: 0.0636 - lr: 1.0000e-04 - 264ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.1069 - val_loss: 0.0639 - lr: 1.0000e-04 - 262ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.1029 - val_loss: 0.0643 - lr: 1.0000e-04 - 250ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0989 - val_loss: 0.0646 - lr: 1.0000e-04 - 223ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0964 - val_loss: 0.0647 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0960 - val_loss: 0.0647 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0956 - val_loss: 0.0647 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0952 - val_loss: 0.0648 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0948 - val_loss: 0.0648 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0944 - val_loss: 0.0649 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.04951
43/43 - 0s - loss: 1.0940 - val_loss: 0.0649 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step
[Epochs 19-51: val_loss did not improve from 0.04951 in any epoch; training loss drifted from 1.0936 to 1.0810 while val_loss crept from 0.0649 to 0.0664 at lr 1.0000e-05; per-epoch logs condensed]
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 19.776724587061057
RMSE: 4.447102943159856
MAPE: 3.587879520041786
EMA
Prediction vs Close: 57.09% Accuracy
Prediction vs Prediction: 48.88% Accuracy
MSE: 31.621751516368622
RMSE: 5.623322106759368
MAPE: 4.355106062590965
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 52.4753296205182
RMSE: 7.2439857551294375
MAPE: 5.852253139584933
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 146.44755629127866
RMSE: 12.10155181335347
MAPE: 10.943210296434415
KAMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 19.64215945229788
RMSE: 4.4319475913302355
MAPE: 3.5686191181651687
MIDPOINT
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 45.52% Accuracy
MSE: 19.83404242536117
RMSE: 4.453542682557468
MAPE: 3.5743844299716057
T3
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 70.66866288490243
RMSE: 8.406465540576637
MAPE: 6.802843731006552
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=inf, Time=0.47 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=4352.703, Time=0.03 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=3889.412, Time=0.05 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=inf, Time=0.25 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=3689.930, Time=0.05 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=3574.245, Time=0.08 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=inf, Time=1.18 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=inf, Time=0.79 sec
ARIMA(3,3,0)(0,0,0)[0] intercept : AIC=3576.245, Time=0.18 sec
Best model: ARIMA(3,3,0)(0,0,0)[0]
Total fit time: 3.098 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 0) Log Likelihood -1783.123
Date: Sun, 12 Dec 2021 AIC 3574.245
Time: 15:28:06 BIC 3593.008
Sample: 0 HQIC 3581.451
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 -1.1480 0.004 -302.430 0.000 -1.155 -1.141
ar.L2 -0.8300 0.008 -99.682 0.000 -0.846 -0.814
ar.L3 -0.3687 0.007 -50.527 0.000 -0.383 -0.354
sigma2 4.9055 0.028 175.970 0.000 4.851 4.960
===================================================================================
Ljung-Box (L1) (Q): 11.61 Jarque-Bera (JB): 1261976.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.16 Skew: 2.52
Prob(H) (two-sided): 0.00 Kurtosis: 196.90
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
ARIMA order: (3, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04045, saving model to LSTM4.h5
90/90 - 4s - loss: 1.2159 - val_loss: 0.0404 - lr: 0.0010 - 4s/epoch - 47ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04045
90/90 - 1s - loss: 0.9800 - val_loss: 0.0482 - lr: 0.0010 - 524ms/epoch - 6ms/step
[Epochs 3-51: val_loss did not improve from 0.04045 in any epoch; ReduceLROnPlateau cut the learning rate to 1.0000e-04 at epoch 6 and to 1.0000e-05 at epoch 11; training loss fell from 0.8617 to 0.6874 while val_loss rose from 0.0639 to 0.1338; per-epoch logs condensed]
Epoch 00051: early stopping
TEMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 48.88% Accuracy
MSE: 14.860699364166678
RMSE: 3.8549577642519868
MAPE: 3.1502795604602833
Runtime: mins: 11.725093292033337
from google.colab import files
import cv2
uploaded = files.upload()
img = cv2.cvtColor(cv2.imread('Experiment4.png'), cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert to RGB for matplotlib
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fa5ea202a50>
for SIM in simulation4.keys():
    plot_train(simulation4, SIM)
    plot_test(simulation4, SIM)
----- Train RMSE for SMA ----- 2.7003156237977874 ----- Train_MSE_LSTM for SMA ----- 7.291704468126434 ----- Train MAE LSTM for SMA ----- 2.660792327163243
----- Test RMSE for SMA----- 4.447102943159856 ----- Test_MSE_LSTM for SMA----- 19.776724587061057 ----- Test_MAE_LSTM for SMA----- 3.587879520041786
----- Train RMSE for EMA ----- 1.713593446317321 ----- Train_MSE_LSTM for EMA ----- 2.9364024992616735 ----- Train MAE LSTM for EMA ----- 1.5223039660123314
----- Test RMSE for EMA----- 5.623322106759368 ----- Test_MSE_LSTM for EMA----- 31.621751516368622 ----- Test_MAE_LSTM for EMA----- 4.355106062590965
----- Train RMSE for WMA ----- 3.927450398854089 ----- Train_MSE_LSTM for WMA ----- 15.424866635459145 ----- Train MAE LSTM for WMA ----- 3.865711769255081
----- Test RMSE for WMA----- 7.2439857551294375 ----- Test_MSE_LSTM for WMA----- 52.4753296205182 ----- Test_MAE_LSTM for WMA----- 5.852253139584933
----- Train RMSE for DEMA ----- 2.0481672182170936 ----- Train_MSE_LSTM for DEMA ----- 4.194988953779147 ----- Train MAE LSTM for DEMA ----- 1.7103755544907977
----- Test RMSE for DEMA----- 12.10155181335347 ----- Test_MSE_LSTM for DEMA----- 146.44755629127866 ----- Test_MAE_LSTM for DEMA----- 10.943210296434415
----- Train RMSE for KAMA ----- 4.06602280300264 ----- Train_MSE_LSTM for KAMA ----- 16.532541434537443 ----- Train MAE LSTM for KAMA ----- 4.004387627733816
----- Test RMSE for KAMA----- 4.4319475913302355 ----- Test_MSE_LSTM for KAMA----- 19.64215945229788 ----- Test_MAE_LSTM for KAMA----- 3.5686191181651687
----- Train RMSE for MIDPOINT ----- 3.3807826675549606 ----- Train_MSE_LSTM for MIDPOINT ----- 11.429691445240035 ----- Train MAE LSTM for MIDPOINT ----- 3.34850282244163
----- Test RMSE for MIDPOINT----- 4.453542682557468 ----- Test_MSE_LSTM for MIDPOINT----- 19.83404242536117 ----- Test_MAE_LSTM for MIDPOINT----- 3.5743844299716057
----- Train RMSE for T3 ----- 3.7130600011370944 ----- Train_MSE_LSTM for T3 ----- 13.786814572044198 ----- Train MAE LSTM for T3 ----- 3.666137409682321
----- Test RMSE for T3----- 8.406465540576637 ----- Test_MSE_LSTM for T3----- 70.66866288490243 ----- Test_MAE_LSTM for T3----- 6.802843731006552
----- Train RMSE for TEMA ----- 1.286632285419522 ----- Train_MSE_LSTM for TEMA ----- 1.6554226378838626 ----- Train MAE LSTM for TEMA ----- 1.1888332532183958
----- Test RMSE for TEMA----- 3.8549577642519868 ----- Test_MSE_LSTM for TEMA----- 14.860699364166678 ----- Test_MAE_LSTM for TEMA----- 3.1502795604602833
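Each Train/Test line above reports MSE, its square root RMSE, and MAE. A minimal, self-contained sketch of how these three are computed with scikit-learn, on hypothetical prices rather than the experiment's data:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Hypothetical actual vs. predicted closes, purely for illustration
actual = np.array([100.0, 102.0, 101.0, 105.0])
predicted = np.array([101.0, 101.0, 103.0, 104.0])

mse = mean_squared_error(actual, predicted)   # mean of squared errors
rmse = mse ** 0.5                             # same units as the price series
mae = mean_absolute_error(actual, predicted)  # mean of absolute errors

print(mse, rmse, mae)  # 1.75 1.3228... 1.25
```

RMSE is reported alongside MSE throughout because it is directly comparable to the price scale.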
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    # Determine model order via stepwise search
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate walk-forward predictions: refit a fixed-order ARIMA on the growing
    # history and forecast one step ahead at each test point.
    # NOTE: the refits below use the endogenous series only.
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])
    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))
    # Generate error metrics
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
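The prediction loop in `get_arima_exog` is a walk-forward (rolling-origin) forecast: fit on the history, predict one step, then append the true observation and repeat. A stripped-down sketch of the same pattern on toy numbers, with a naive last-value forecaster standing in for `pmdarima.ARIMA` so the example has no heavy dependencies:

```python
def one_step_forecast(history):
    # Stand-in for ARIMA fit/predict: a naive random-walk forecast.
    # get_arima_exog refits pmdarima.ARIMA(order=order) here instead.
    return history[-1]

train = [1.0, 2.0, 3.0]
test = [4.0, 5.0, 6.0]

history = list(train)
preds = []
for obs in test:
    preds.append(one_step_forecast(history))  # forecast one step ahead
    history.append(obs)                       # then reveal the true value

print(preds)  # [3.0, 4.0, 5.0]
```

Refitting on the full history at every step is what makes the ARIMA stage slow relative to the LSTM stage, but it avoids look-ahead bias in the test-set forecasts.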
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Get data and check shape: each sample in X holds n_steps_in days of features;
    # yc holds the corresponding closing price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20
    input_dim = X_train.shape[1]
    feature_size = X_train.shape[2]
    output_dim = y_train.shape[1]
    # Option 1: set up & fit the LSTM RNN
    model = Sequential()
    model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    model.add(Dense(units=64, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(units=output_dim))
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    # Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM5.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False
    )
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(fname2 + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()
    # Option 2: bidirectional LSTM
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # (fit, plot_model, loss plotting, and checkpointing as in Option 1, with batch_size=1
    # and title img_file+'-'+ma+' Loss')
    # Option 3: custom double-tanh activation
    # reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'
    # def double_tanh(x):
    #     return (K.tanh(x) * 2)
    # get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})
    # Model generation; on weight regularization see
    # https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model = Sequential()
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2,
    #                kernel_regularizer=l1_l2(0.00, 0.00), bias_regularizer=l1_l2(0.00, 0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # (fit, plot_model, loss plotting, and checkpointing as in Option 1, with batch_size=1)
    # Option 4: stacked LSTM
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    # model.add(LSTM(units=int(lstm_len/2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # (fit, plot_model, loss plotting, and checkpointing as in Option 1, with batch_size=1)
    # Generate train predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate train error metrics
    # TODO: replace with yc and X_test generated by the new multistep method
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # Generate test predictions (det is subtracted as a constant offset correction)
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate test error metrics
    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
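The driver loop that follows rests on a simple additive decomposition: a moving average of the close gives the low-volatility component (modelled by ARIMA), and the residual close minus MA gives the high-volatility component (modelled by the LSTM); the final forecast is the sum of the two models' predictions. A minimal sketch of that split-and-recombine step with pandas, on made-up prices:

```python
import pandas as pd

close = pd.Series([10.0, 12.0, 11.0, 13.0, 14.0])

# Low-volatility component: a 2-period simple moving average
# (NaNs at the start filled with 0, as in the driver loop)
low_vol = close.rolling(2).mean().fillna(0)

# High-volatility component: the residual left after removing the smooth part
high_vol = close - low_vol

# Recombining the components reconstructs the original series exactly, which is
# why summing the two models' predictions yields a forecast of the close itself
reconstructed = low_vol + high_vol
assert reconstructed.equals(close)
```

Because the decomposition is exact, any error in the final forecast comes entirely from the two models, not from the split itself.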
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation5 = {}
imgfile = 'Experiment5'
for ma in optimized_period:
print(ma)
print(functions[ma])
print ( int( optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod = int( optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
except:
print('ARIMA error, skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation5[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation5_data.json', 'w') as fp:
json.dump(simulation5, fp)
for ma in simulation5.keys():
    print('\n' + ma)
    print('Prediction vs Close:\t\t'
          + str(round(100 * simulation5[ma]['accuracy']['prediction vs close'], 2)) + '% Accuracy')
    print('Prediction vs Prediction:\t'
          + str(round(100 * simulation5[ma]['accuracy']['prediction vs prediction'], 2)) + '% Accuracy')
    print('MSE:\t', simulation5[ma]['final']['mse'],
          '\nRMSE:\t', simulation5[ma]['final']['rmse'],
          '\nMAE:\t', simulation5[ma]['final']['mae'])
elapsed = timeit.default_timer() - start_time
print('Runtime: mins:', elapsed / 60)
SMA

SMA([input_arrays], [timeperiod=30])

Simple Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
17
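The three overlap studies used in this simulation (SMA, EMA, WMA) can be approximated with plain pandas when TA-Lib is unavailable. A sketch with `timeperiod=30` as in the defaults above; note that TA-Lib's EMA seeds its first value with an SMA, so pandas `ewm` differs slightly at the start of the series:

```python
import numpy as np
import pandas as pd

close = pd.Series(np.linspace(100.0, 130.0, 60))  # toy price series
timeperiod = 30

sma = close.rolling(timeperiod).mean()
ema = close.ewm(span=timeperiod, adjust=False).mean()

# WMA: linear weights 1..timeperiod, most recent bar weighted highest
weights = np.arange(1, timeperiod + 1, dtype=float)
wma = close.rolling(timeperiod).apply(
    lambda window: np.dot(window, weights) / weights.sum(), raw=True)
```

The first `timeperiod - 1` entries of the rolling variants are NaN, matching the warm-up period TA-Lib also requires.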
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.787, Time=3.85 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.588, Time=5.46 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14596.280, Time=5.73 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.588, Time=8.46 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16924.805, Time=10.67 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14482.349, Time=10.95 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17215.608, Time=21.20 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.588, Time=10.42 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15570.350, Time=19.26 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11671.292, Time=27.08 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 123.097 seconds
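The stepwise search above picks the ARIMA order minimizing AIC = 2k − 2·log L. The selection principle can be illustrated without pmdarima, using least-squares AR(p) fits on a simulated series (a toy sketch of order selection by AIC, not the notebook's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for t in range(2, n):  # simulate an AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ar_aic(y, p):
    """Fit AR(p) by least squares and return the Gaussian AIC."""
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])  # lagged regressors
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    sigma2 = resid.var()
    loglik = -0.5 * len(target) * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (p + 1) - 2 * loglik  # k = p coefficients + error variance

best_p = min(range(1, 6), key=lambda p: ar_aic(y, p))
```

On a true AR(2) series, AIC should clearly prefer order 2 over order 1, since the second lag buys far more log-likelihood than its 2-point penalty.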
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8639.804
Date: Sun, 12 Dec 2021 AIC -17215.608
Time: 16:07:33 BIC -17065.501
Sample: 0 HQIC -17157.961
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.057e-09 5.82e-05 -6.97e-05 1.000 -0.000 0.000
x2 -4.057e-09 5.81e-05 -6.99e-05 1.000 -0.000 0.000
x3 -4.111e-09 5.49e-05 -7.49e-05 1.000 -0.000 0.000
x4 1.0000 5.71e-05 1.75e+04 0.000 1.000 1.000
x5 -3.706e-09 5.43e-05 -6.82e-05 1.000 -0.000 0.000
x6 -1.082e-08 0.000 -6.08e-05 1.000 -0.000 0.000
x7 -4.025e-09 5.63e-05 -7.15e-05 1.000 -0.000 0.000
x8 -4.035e-09 5.19e-05 -7.78e-05 1.000 -0.000 0.000
x9 -1.522e-10 2.9e-05 -5.25e-06 1.000 -5.68e-05 5.68e-05
x10 -6.396e-10 1.04e-05 -6.15e-05 1.000 -2.04e-05 2.04e-05
x11 -3.921e-09 5.06e-05 -7.75e-05 1.000 -9.91e-05 9.91e-05
x12 -4.102e-09 5.29e-05 -7.76e-05 1.000 -0.000 0.000
x13 -4.087e-09 5.75e-05 -7.11e-05 1.000 -0.000 0.000
x14 -3.619e-08 0.000 -0.000 1.000 -0.000 0.000
x15 -4.806e-09 4.61e-05 -0.000 1.000 -9.03e-05 9.03e-05
x16 -3.507e-09 0.000 -2.98e-05 1.000 -0.000 0.000
x17 -3.121e-09 6.02e-05 -5.18e-05 1.000 -0.000 0.000
x18 -1.172e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -5.433e-09 6.06e-05 -8.96e-05 1.000 -0.000 0.000
x20 -1.393e-08 4.79e-05 -0.000 1.000 -9.39e-05 9.39e-05
x21 -4.216e-09 6.63e-05 -6.36e-05 1.000 -0.000 0.000
x22 -3.479e-11 1.66e-08 -0.002 0.998 -3.25e-08 3.24e-08
x23 -9.221e-10 1.4e-07 -0.007 0.995 -2.74e-07 2.73e-07
x24 -8.085e-08 0.001 -6.96e-05 1.000 -0.002 0.002
x25 -9.642e-08 0.001 -0.000 1.000 -0.002 0.002
x26 -5.019e-08 0.000 -0.000 1.000 -0.000 0.000
x27 -2.457e-08 7.65e-05 -0.000 1.000 -0.000 0.000
x28 -3.411e-08 0.000 -0.000 1.000 -0.000 0.000
x29 -1.507e-08 4.36e-05 -0.000 1.000 -8.54e-05 8.54e-05
ma.L1 -1.3898 8.03e-07 -1.73e+06 0.000 -1.390 -1.390
ma.L2 0.4031 8.36e-07 4.82e+05 0.000 0.403 0.403
sigma2 7.528e-11 7.24e-11 1.040 0.298 -6.66e-11 2.17e-10
===================================================================================
Ljung-Box (L1) (Q): 89.12 Jarque-Bera (JB): 1533103.33
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.56
Prob(H) (two-sided): 0.00 Kurtosis: 216.50
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.08e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
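Warning [2] above flags a near-singular covariance matrix via its condition number (4.08e+25), which is why the coefficient standard errors are unreliable. Condition numbers can be inspected directly with NumPy; a generic sketch showing how near-collinear regressors blow the number up:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
X_bad = np.column_stack([x, x + 1e-9 * rng.normal(size=100)])  # near-duplicate column
X_ok = np.column_stack([x, rng.normal(size=100)])              # independent columns

cond_bad = np.linalg.cond(X_bad.T @ X_bad)  # astronomically large
cond_ok = np.linalg.cond(X_ok.T @ X_ok)     # modest
```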
WARNING:tensorflow:Layer lstm_57 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.25612, saving model to LSTM5.h5 (loss: 0.4353, lr: 0.0010)
Epoch 00002: val_loss improved from 0.25612 to 0.11825, saving model to LSTM5.h5 (loss: 0.1198)
Epoch 00003: val_loss improved from 0.11825 to 0.11433, saving model to LSTM5.h5 (loss: 0.0779)
Epoch 00005: val_loss improved from 0.11433 to 0.02870, saving model to LSTM5.h5 (loss: 0.0636)
Epoch 00010: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000e-05
[Epochs 6-54: val_loss did not improve from 0.02870; per-epoch lines elided]
Epoch 00055: early stopping
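The training trace above reflects three Keras callbacks interacting: ModelCheckpoint (saving to LSTM5.h5 on each val_loss improvement), ReduceLROnPlateau (the lr drops at epochs 10 and 15), and EarlyStopping (epoch 55). The plateau/early-stop bookkeeping can be sketched in plain Python (an illustration of the logic only, not the Keras implementation; the patience values here are assumptions):

```python
class PlateauMonitor:
    """Tracks val_loss; reduces lr on plateau and flags early stopping."""

    def __init__(self, lr=1e-3, factor=0.1, lr_patience=2, stop_patience=5, min_lr=1e-5):
        self.lr, self.factor, self.min_lr = lr, factor, min_lr
        self.lr_patience, self.stop_patience = lr_patience, stop_patience
        self.best = float('inf')
        self.lr_wait = 0
        self.stop_wait = 0
        self.stopped = False

    def update(self, val_loss):
        if val_loss < self.best:        # improvement: a checkpoint would save here
            self.best = val_loss
            self.lr_wait = self.stop_wait = 0
            return
        self.lr_wait += 1
        self.stop_wait += 1
        if self.lr_wait >= self.lr_patience:      # ReduceLROnPlateau behavior
            self.lr = max(self.lr * self.factor, self.min_lr)
            self.lr_wait = 0
        if self.stop_wait >= self.stop_patience:  # EarlyStopping behavior
            self.stopped = True

monitor = PlateauMonitor()
for vl in [0.30, 0.20, 0.25, 0.26, 0.27, 0.28, 0.29]:
    monitor.update(vl)
```

After the improving epochs, five non-improving updates trigger two learning-rate reductions and then the stop flag, mirroring the shape of the logs above.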
SMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 52.61% Accuracy
MSE: 34.39169744803393
RMSE: 5.864443490053761
MAE: 4.893666026892695
EMA

EMA([input_arrays], [timeperiod=30])

Exponential Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
51
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.79 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.47 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15952.568, Time=15.15 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=7.81 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16628.634, Time=10.62 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16462.206, Time=24.29 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16848.298, Time=12.82 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.023, Time=6.57 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.619, Time=3.78 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=7.45 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=18.63 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.994, Time=3.86 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.667, Time=5.19 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 125.475 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 16:13:56 BIC -16911.966
Sample: 0 HQIC -17010.204
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.316e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x2 -2.309e-10 6.24e-05 -3.7e-06 1.000 -0.000 0.000
x3 -2.325e-10 6.26e-05 -3.71e-06 1.000 -0.000 0.000
x4 1.0000 6.25e-05 1.6e+04 0.000 1.000 1.000
x5 -2.107e-10 5.96e-05 -3.54e-06 1.000 -0.000 0.000
x6 -7.997e-10 0.000 -7.41e-06 1.000 -0.000 0.000
x7 -2.295e-10 6.22e-05 -3.69e-06 1.000 -0.000 0.000
x8 -2.246e-10 6.15e-05 -3.65e-06 1.000 -0.000 0.000
x9 -1.167e-11 1.25e-05 -9.33e-07 1.000 -2.45e-05 2.45e-05
x10 -4.454e-11 2.66e-05 -1.68e-06 1.000 -5.21e-05 5.21e-05
x11 -2.221e-10 6.11e-05 -3.63e-06 1.000 -0.000 0.000
x12 -2.266e-10 6.18e-05 -3.66e-06 1.000 -0.000 0.000
x13 -2.315e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x14 -1.767e-09 0.000 -1.02e-05 1.000 -0.000 0.000
x15 -2.11e-10 5.93e-05 -3.56e-06 1.000 -0.000 0.000
x16 -5.283e-10 9.45e-05 -5.59e-06 1.000 -0.000 0.000
x17 -2.098e-10 6.01e-05 -3.49e-06 1.000 -0.000 0.000
x18 -3.82e-11 2.41e-05 -1.58e-06 1.000 -4.73e-05 4.73e-05
x19 -2.645e-10 6.61e-05 -4e-06 1.000 -0.000 0.000
x20 -2.417e-10 6.21e-05 -3.89e-06 1.000 -0.000 0.000
x21 -4.824e-10 8.83e-05 -5.46e-06 1.000 -0.000 0.000
x22 -3.758e-13 1.19e-11 -0.032 0.975 -2.36e-11 2.29e-11
x23 -1.089e-11 8.42e-11 -0.129 0.897 -1.76e-10 1.54e-10
x24 -2.538e-09 0.000 -1.44e-05 1.000 -0.000 0.000
x25 -2.038e-09 0.000 -1.49e-05 1.000 -0.000 0.000
x26 -3.16e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x27 -2.955e-09 0.000 -1.32e-05 1.000 -0.000 0.000
x28 -1.664e-09 0.000 -9.94e-06 1.000 -0.000 0.000
x29 -1.568e-09 0.000 -9.63e-06 1.000 -0.000 0.000
ar.L1 -0.4923 6.2e-10 -7.94e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 3.6e-10 -5.35e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.71e-10 -2.71e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.41e-09 -5.04e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 51.79 Jarque-Bera (JB): 4012066.18
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.44
Prob(H) (two-sided): 0.00 Kurtosis: 348.68
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.01e+30. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
WARNING:tensorflow:Layer lstm_58 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.35754, saving model to LSTM5.h5 (loss: 0.7057, lr: 0.0010)
Epoch 00002: val_loss improved from 0.35754 to 0.02747, saving model to LSTM5.h5 (loss: 0.2192)
Epoch 00005: val_loss improved from 0.02747 to 0.01373, saving model to LSTM5.h5 (loss: 0.0562)
Epoch 00007: val_loss improved from 0.01373 to 0.01201, saving model to LSTM5.h5 (loss: 0.0451)
Epoch 00009: val_loss improved from 0.01201 to 0.00908, saving model to LSTM5.h5 (loss: 0.0439)
Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000e-05
[Epochs 10-58: val_loss did not improve from 0.00908; per-epoch lines elided]
Epoch 00059: early stopping
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 73.04930062485933
RMSE: 8.546888359213506
MAE: 6.613879572809731
WMA

WMA([input_arrays], [timeperiod=30])

Weighted Moving Average (Overlap Studies)

Inputs:
    price: (any ndarray)
Parameters:
    timeperiod: 30
Outputs:
    real
49
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.41 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.42 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14597.576, Time=5.58 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=8.18 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15338.693, Time=11.30 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15153.472, Time=27.61 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17112.658, Time=15.58 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.587, Time=10.56 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15106.216, Time=13.87 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-12251.715, Time=33.99 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 135.507 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8588.329
Date: Sun, 12 Dec 2021 AIC -17112.658
Time: 16:24:58 BIC -16962.551
Sample: 0 HQIC -17055.011
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.53e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x2 -4.512e-09 3.25e-06 -0.001 0.999 -6.38e-06 6.37e-06
x3 -4.538e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x4 1.0000 3.26e-06 3.07e+05 0.000 1.000 1.000
x5 -4.105e-09 3.11e-06 -0.001 0.999 -6.1e-06 6.09e-06
x6 -1.488e-08 5.45e-06 -0.003 0.998 -1.07e-05 1.07e-05
x7 -4.481e-09 3.24e-06 -0.001 0.999 -6.36e-06 6.36e-06
x8 -4.365e-09 3.2e-06 -0.001 0.999 -6.29e-06 6.28e-06
x9 -4.628e-10 8.38e-07 -0.001 1.000 -1.64e-06 1.64e-06
x10 -7.326e-10 1.3e-06 -0.001 1.000 -2.55e-06 2.54e-06
x11 -4.347e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x12 -4.345e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x13 -4.52e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x14 -3.586e-08 9e-06 -0.004 0.997 -1.77e-05 1.76e-05
x15 -3.757e-09 2.98e-06 -0.001 0.999 -5.84e-06 5.83e-06
x16 -1.24e-08 5.36e-06 -0.002 0.998 -1.05e-05 1.05e-05
x17 -4.515e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x18 -2.632e-10 7.07e-07 -0.000 1.000 -1.39e-06 1.39e-06
x19 -4.642e-09 3.3e-06 -0.001 0.999 -6.47e-06 6.46e-06
x20 -3.919e-10 6.91e-07 -0.001 1.000 -1.36e-06 1.35e-06
x21 -7.69e-09 4.13e-06 -0.002 0.999 -8.11e-06 8.09e-06
x22 -6.998e-12 2.69e-13 -25.970 0.000 -7.53e-12 -6.47e-12
x23 -1.81e-10 2.22e-12 -81.582 0.000 -1.85e-10 -1.77e-10
x24 -4.955e-08 8.9e-06 -0.006 0.996 -1.75e-05 1.74e-05
x25 -4.901e-08 8.4e-06 -0.006 0.995 -1.65e-05 1.64e-05
x26 -6.446e-08 1.2e-05 -0.005 0.996 -2.37e-05 2.35e-05
x27 -5.73e-08 1.14e-05 -0.005 0.996 -2.24e-05 2.23e-05
x28 -2.997e-08 8.22e-06 -0.004 0.997 -1.61e-05 1.61e-05
x29 -3.486e-08 8.89e-06 -0.004 0.997 -1.75e-05 1.74e-05
ma.L1 -1.3902 3.62e-10 -3.84e+09 0.000 -1.390 -1.390
ma.L2 0.4033 3.72e-10 1.08e+09 0.000 0.403 0.403
sigma2 8.541e-11 6.95e-11 1.229 0.219 -5.08e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 66.92 Jarque-Bera (JB): 6039240.46
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.14
Prob(H) (two-sided): 0.00 Kurtosis: 426.63
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.94e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
WARNING:tensorflow:Layer lstm_59 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.26413, saving model to LSTM5.h5 (loss: 0.3187, lr: 0.0010)
Epoch 00002: val_loss improved from 0.26413 to 0.12648, saving model to LSTM5.h5 (loss: 0.1236)
Epoch 00006: val_loss improved from 0.12648 to 0.11518, saving model to LSTM5.h5 (loss: 0.0888)
Epoch 00008: val_loss improved from 0.11518 to 0.11176, saving model to LSTM5.h5 (loss: 0.0441)
Epoch 00011: val_loss improved from 0.11176 to 0.10336, saving model to LSTM5.h5 (loss: 0.0379)
Epoch 00014: val_loss improved from 0.10336 to 0.09685, saving model to LSTM5.h5 (loss: 0.0289)
Epoch 00016: val_loss improved from 0.09685 to 0.08657, saving model to LSTM5.h5 (loss: 0.0333)
Epoch 00018: val_loss improved from 0.08657 to 0.03994, saving model to LSTM5.h5 (loss: 0.0359)
Epoch 00023: val_loss improved from 0.03994 to 0.03396, saving model to LSTM5.h5 (loss: 0.0371)
[Intermediate epochs without improvement elided]
Epoch 00026: val_loss did not improve
from 0.03396 17/17 - 0s - loss: 0.0288 - val_loss: 0.0567 - lr: 0.0010 - 159ms/epoch - 9ms/step Epoch 27/500 Epoch 00027: val_loss did not improve from 0.03396 17/17 - 0s - loss: 0.0368 - val_loss: 0.0403 - lr: 0.0010 - 177ms/epoch - 10ms/step Epoch 28/500 Epoch 00028: val_loss improved from 0.03396 to 0.03342, saving model to LSTM5.h5 17/17 - 0s - loss: 0.0256 - val_loss: 0.0334 - lr: 0.0010 - 176ms/epoch - 10ms/step Epoch 29/500 Epoch 00029: val_loss improved from 0.03342 to 0.03152, saving model to LSTM5.h5 17/17 - 0s - loss: 0.0273 - val_loss: 0.0315 - lr: 0.0010 - 171ms/epoch - 10ms/step Epoch 30/500 Epoch 00030: val_loss did not improve from 0.03152 17/17 - 0s - loss: 0.0231 - val_loss: 0.0574 - lr: 0.0010 - 173ms/epoch - 10ms/step Epoch 31/500 Epoch 00031: val_loss did not improve from 0.03152 17/17 - 0s - loss: 0.0227 - val_loss: 0.0366 - lr: 0.0010 - 151ms/epoch - 9ms/step Epoch 32/500 Epoch 00032: val_loss did not improve from 0.03152 17/17 - 0s - loss: 0.0227 - val_loss: 0.0813 - lr: 0.0010 - 181ms/epoch - 11ms/step Epoch 33/500 Epoch 00033: val_loss improved from 0.03152 to 0.01575, saving model to LSTM5.h5 17/17 - 0s - loss: 0.0217 - val_loss: 0.0157 - lr: 0.0010 - 188ms/epoch - 11ms/step Epoch 34/500 Epoch 00034: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0255 - val_loss: 0.1116 - lr: 0.0010 - 171ms/epoch - 10ms/step Epoch 35/500 Epoch 00035: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0231 - val_loss: 0.0222 - lr: 0.0010 - 160ms/epoch - 9ms/step Epoch 36/500 Epoch 00036: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0224 - val_loss: 0.0222 - lr: 0.0010 - 172ms/epoch - 10ms/step Epoch 37/500 Epoch 00037: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0257 - val_loss: 0.0214 - lr: 0.0010 - 159ms/epoch - 9ms/step Epoch 38/500 Epoch 00038: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 
Epoch 00038: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0213 - val_loss: 0.0353 - lr: 0.0010 - 174ms/epoch - 10ms/step Epoch 39/500 Epoch 00039: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0206 - val_loss: 0.0310 - lr: 1.0000e-04 - 151ms/epoch - 9ms/step Epoch 40/500 Epoch 00040: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0205 - val_loss: 0.0274 - lr: 1.0000e-04 - 172ms/epoch - 10ms/step Epoch 41/500 Epoch 00041: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0225 - val_loss: 0.0250 - lr: 1.0000e-04 - 155ms/epoch - 9ms/step Epoch 42/500 Epoch 00042: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0197 - val_loss: 0.0229 - lr: 1.0000e-04 - 174ms/epoch - 10ms/step Epoch 43/500 Epoch 00043: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. Epoch 00043: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0202 - val_loss: 0.0224 - lr: 1.0000e-04 - 171ms/epoch - 10ms/step Epoch 44/500 Epoch 00044: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0192 - val_loss: 0.0221 - lr: 1.0000e-05 - 181ms/epoch - 11ms/step Epoch 45/500 Epoch 00045: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0212 - val_loss: 0.0218 - lr: 1.0000e-05 - 152ms/epoch - 9ms/step Epoch 46/500 Epoch 00046: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0190 - val_loss: 0.0217 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step Epoch 47/500 Epoch 00047: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0214 - val_loss: 0.0216 - lr: 1.0000e-05 - 150ms/epoch - 9ms/step Epoch 48/500 Epoch 00048: ReduceLROnPlateau reducing learning rate to 1e-05. 
Epoch 00048: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0214 - val_loss: 0.0212 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step Epoch 49/500 Epoch 00049: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0183 - val_loss: 0.0210 - lr: 1.0000e-05 - 196ms/epoch - 12ms/step Epoch 50/500 Epoch 00050: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0201 - val_loss: 0.0207 - lr: 1.0000e-05 - 195ms/epoch - 11ms/step Epoch 51/500 Epoch 00051: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0210 - val_loss: 0.0207 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step Epoch 52/500 Epoch 00052: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0207 - val_loss: 0.0206 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step Epoch 53/500 Epoch 00053: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0205 - val_loss: 0.0204 - lr: 1.0000e-05 - 159ms/epoch - 9ms/step Epoch 54/500 Epoch 00054: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0197 - val_loss: 0.0201 - lr: 1.0000e-05 - 159ms/epoch - 9ms/step Epoch 55/500 Epoch 00055: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0202 - val_loss: 0.0197 - lr: 1.0000e-05 - 163ms/epoch - 10ms/step Epoch 56/500 Epoch 00056: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0206 - val_loss: 0.0196 - lr: 1.0000e-05 - 163ms/epoch - 10ms/step Epoch 57/500 Epoch 00057: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0201 - val_loss: 0.0196 - lr: 1.0000e-05 - 160ms/epoch - 9ms/step Epoch 58/500 Epoch 00058: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0194 - val_loss: 0.0197 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step Epoch 59/500 Epoch 00059: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0210 - val_loss: 0.0198 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step Epoch 60/500 Epoch 00060: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0195 - val_loss: 0.0196 - lr: 1.0000e-05 - 161ms/epoch - 9ms/step Epoch 61/500 Epoch 00061: val_loss did 
not improve from 0.01575 17/17 - 0s - loss: 0.0200 - val_loss: 0.0194 - lr: 1.0000e-05 - 175ms/epoch - 10ms/step Epoch 62/500 Epoch 00062: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0206 - val_loss: 0.0196 - lr: 1.0000e-05 - 143ms/epoch - 8ms/step Epoch 63/500 Epoch 00063: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0194 - val_loss: 0.0193 - lr: 1.0000e-05 - 160ms/epoch - 9ms/step Epoch 64/500 Epoch 00064: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0202 - val_loss: 0.0191 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step Epoch 65/500 Epoch 00065: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0212 - val_loss: 0.0190 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step Epoch 66/500 Epoch 00066: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0204 - val_loss: 0.0189 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step Epoch 67/500 Epoch 00067: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0199 - val_loss: 0.0188 - lr: 1.0000e-05 - 217ms/epoch - 13ms/step Epoch 68/500 Epoch 00068: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0218 - val_loss: 0.0189 - lr: 1.0000e-05 - 170ms/epoch - 10ms/step Epoch 69/500 Epoch 00069: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0217 - val_loss: 0.0187 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step Epoch 70/500 Epoch 00070: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0209 - val_loss: 0.0187 - lr: 1.0000e-05 - 162ms/epoch - 10ms/step Epoch 71/500 Epoch 00071: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0207 - val_loss: 0.0186 - lr: 1.0000e-05 - 168ms/epoch - 10ms/step Epoch 72/500 Epoch 00072: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0211 - val_loss: 0.0188 - lr: 1.0000e-05 - 186ms/epoch - 11ms/step Epoch 73/500 Epoch 00073: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0190 - val_loss: 0.0189 - lr: 1.0000e-05 - 166ms/epoch - 10ms/step Epoch 74/500 Epoch 00074: val_loss did not improve from 0.01575 
17/17 - 0s - loss: 0.0190 - val_loss: 0.0186 - lr: 1.0000e-05 - 147ms/epoch - 9ms/step Epoch 75/500 Epoch 00075: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0203 - val_loss: 0.0186 - lr: 1.0000e-05 - 153ms/epoch - 9ms/step Epoch 76/500 Epoch 00076: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0207 - val_loss: 0.0186 - lr: 1.0000e-05 - 154ms/epoch - 9ms/step Epoch 77/500 Epoch 00077: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0213 - val_loss: 0.0185 - lr: 1.0000e-05 - 151ms/epoch - 9ms/step Epoch 78/500 Epoch 00078: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0209 - val_loss: 0.0184 - lr: 1.0000e-05 - 182ms/epoch - 11ms/step Epoch 79/500 Epoch 00079: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0204 - val_loss: 0.0184 - lr: 1.0000e-05 - 178ms/epoch - 10ms/step Epoch 80/500 Epoch 00080: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0193 - val_loss: 0.0184 - lr: 1.0000e-05 - 177ms/epoch - 10ms/step Epoch 81/500 Epoch 00081: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0206 - val_loss: 0.0184 - lr: 1.0000e-05 - 148ms/epoch - 9ms/step Epoch 82/500 Epoch 00082: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0200 - val_loss: 0.0185 - lr: 1.0000e-05 - 155ms/epoch - 9ms/step Epoch 83/500 Epoch 00083: val_loss did not improve from 0.01575 17/17 - 0s - loss: 0.0199 - val_loss: 0.0185 - lr: 1.0000e-05 - 178ms/epoch - 10ms/step Epoch 00083: early stopping
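The learning-rate trace in the log (lr: 0.0010 → 1e-04 → 1e-05, then pinned) comes from Keras's ReduceLROnPlateau callback with factor 0.1 and min_lr 1e-5. Its core logic can be mimicked in a few lines of plain Python; the patience value below is an assumption, since it is not visible in the log:

```python
class PlateauLR:
    """Minimal mimic of Keras ReduceLROnPlateau; patience=5 is an assumed value."""

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:            # an improvement resets the counter
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:  # plateau: cut the rate, never below min_lr
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

With factor 0.1 this reproduces the logged sequence 1e-3, 1e-4, 1e-5, and any further trigger leaves the rate clamped at min_lr, which is why the log shows "reducing learning rate to 1e-05" more than once.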
SMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 52.61% Accuracy
MSE: 34.39169744803393
RMSE: 5.864443490053761
MAPE: 4.893666026892695
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 73.04930062485933
RMSE: 8.546888359213506
MAPE: 6.613879572809731
WMA
Prediction vs Close: 55.97% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 70.35376938042184
RMSE: 8.387715385039114
MAPE: 6.8547592718484545
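The per-indicator summaries above combine directional accuracy with MSE, RMSE, and MAPE. A minimal NumPy sketch of those metrics follows; the exact definition of the two accuracy lines is an assumption here (the hit-rate below compares the sign of the predicted move with the realised move in Close):

```python
import numpy as np

def error_report(pred, close):
    """MSE / RMSE / MAPE plus a directional hit-rate (percentages where relevant)."""
    pred = np.asarray(pred, dtype=float)
    close = np.asarray(close, dtype=float)
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100.0
    # hit-rate: fraction of steps where the predicted move matches the realised move
    hit = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100.0
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape, "hit_rate_pct": hit}
```

Note that a hit-rate near 50% is what a coin flip would achieve, which puts the 51-56% figures above in context.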
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
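TA-Lib's DEMA is defined as 2·EMA(n) − EMA(EMA(n), n), which is why it reacts faster than a single EMA of the same period. A pandas equivalent is a few lines (a sketch of the formula, not TA-Lib's implementation; TA-Lib additionally emits leading NaNs over the lookback window):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA(n) - EMA(EMA(n), n)."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

On a steadily rising series the DEMA sits above the single EMA, i.e. it carries less lag, at the cost of being more sensitive to noise.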
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.776, Time=3.27 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.586, Time=5.42 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16271.755, Time=7.27 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.586, Time=8.17 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15152.908, Time=11.03 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14481.105, Time=12.78 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16088.109, Time=22.11 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.021, Time=6.67 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.615, Time=4.21 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=8.04 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=19.68 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.981, Time=4.31 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.666, Time=4.70 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 117.691 seconds
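auto_arima's stepwise search above is, at its core, AIC minimisation over candidate orders (AIC = 2k − 2·ln L; lower is better, and the 2k term penalises extra parameters). The selection principle can be sketched with a toy AR(p) grid search in plain NumPy; the helpers below are illustrative, not pmdarima's API:

```python
import numpy as np

def ar_aic(y, p):
    """Fit AR(p) by least squares and return the Gaussian AIC = 2k - 2*loglik."""
    n = len(y)
    if p == 0:
        resid = y - y.mean()
        k = 1
    else:
        # column i holds the lag-(i+1) values aligned with targets y[p:]
        lags = np.column_stack([y[p - i - 1:n - i - 1] for i in range(p)])
        X = np.column_stack([np.ones(n - p), lags])
        beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        resid = y[p:] - X @ beta
        k = p + 1
    sigma2 = resid.var()
    loglik = -0.5 * len(resid) * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

def select_order(y, max_p=4):
    """Pick the AR order with the lowest AIC (exhaustive here, stepwise in pmdarima)."""
    aics = {p: ar_aic(y, p) for p in range(max_p + 1)}
    return min(aics, key=aics.get), aics
```

pmdarima's stepwise variant (Hyndman-Khandakar) explores neighbouring (p,d,q) orders instead of the full grid, which is why the searches above finish in a handful of fits.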
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 16:31:21 BIC -16911.965
Sample: 0 HQIC -17010.203
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 6.02e-05 -4.65e-06 1.000 -0.000 0.000
x2 -2.817e-10 6.04e-05 -4.66e-06 1.000 -0.000 0.000
x3 -2.805e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x4 1.0000 6.03e-05 1.66e+04 0.000 1.000 1.000
x5 -2.6e-10 5.8e-05 -4.48e-06 1.000 -0.000 0.000
x6 -1.389e-09 0.000 -1.08e-05 1.000 -0.000 0.000
x7 -2.789e-10 6.01e-05 -4.64e-06 1.000 -0.000 0.000
x8 -2.763e-10 5.99e-05 -4.62e-06 1.000 -0.000 0.000
x9 -2.224e-12 1.6e-06 -1.39e-06 1.000 -3.13e-06 3.13e-06
x10 -1.345e-10 4.12e-05 -3.26e-06 1.000 -8.08e-05 8.08e-05
x11 -2.9e-10 6.12e-05 -4.74e-06 1.000 -0.000 0.000
x12 -2.602e-10 5.82e-05 -4.47e-06 1.000 -0.000 0.000
x13 -2.807e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x14 -1.87e-09 0.000 -1.2e-05 1.000 -0.000 0.000
x15 -2.844e-10 6.05e-05 -4.7e-06 1.000 -0.000 0.000
x16 -7.962e-11 3.2e-05 -2.48e-06 1.000 -6.28e-05 6.28e-05
x17 -2.445e-10 5.61e-05 -4.36e-06 1.000 -0.000 0.000
x18 -6.4e-10 9.15e-05 -6.99e-06 1.000 -0.000 0.000
x19 -2.923e-10 6.14e-05 -4.76e-06 1.000 -0.000 0.000
x20 -4.336e-10 7.41e-05 -5.86e-06 1.000 -0.000 0.000
x21 -4.55e-10 7.5e-05 -6.07e-06 1.000 -0.000 0.000
x22 -3.587e-13 1.42e-11 -0.025 0.980 -2.82e-11 2.75e-11
x23 -1.088e-11 9.56e-11 -0.114 0.909 -1.98e-10 1.76e-10
x24 -2.146e-09 0.000 -1.63e-05 1.000 -0.000 0.000
x25 -1.637e-09 0.000 -1.35e-05 1.000 -0.000 0.000
x26 -3.147e-09 0.000 -1.56e-05 1.000 -0.000 0.000
x27 -2.58e-09 0.000 -1.41e-05 1.000 -0.000 0.000
x28 -2.444e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x29 -1.666e-09 0.000 -1.13e-05 1.000 -0.000 0.000
ar.L1 -0.4923 5.1e-10 -9.65e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 2.96e-10 -6.49e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.4e-10 -3.29e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.16e-09 -6.12e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.06 Jarque-Bera (JB): 4126495.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.48
Prob(H) (two-sided): 0.00 Kurtosis: 353.58
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.01e+30. Standard errors may be unstable.
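The second warning flags an enormous condition number (2.01e+30), which is why the near-zero standard errors and extreme z-values in the table above should not be taken at face value. The diagnostic itself is simple to reproduce; the matrix below is a made-up example of two nearly collinear parameters, not taken from this fit:

```python
import numpy as np

# hypothetical 2x2 covariance matrix for two almost perfectly collinear parameters
cov = np.array([[1.0, 1.0 - 1e-12],
                [1.0 - 1e-12, 1.0]])

cond = np.linalg.cond(cov)    # ratio of largest to smallest singular value
ill_conditioned = cond > 1e8  # rule-of-thumb threshold for unstable standard errors
```

A condition number this large means tiny numerical perturbations swing the inverted covariance wildly, so the reported standard errors are effectively noise.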
ARIMA order: (3, 3, 1)
WARNING:tensorflow:Layer lstm_60 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.08624, saving model to LSTM5.h5
Epoch 00002: val_loss improved from 0.08624 to 0.05008, saving model to LSTM5.h5
Epoch 00003: val_loss improved from 0.05008 to 0.02501, saving model to LSTM5.h5
Epoch 00008: ReduceLROnPlateau reducing learning rate to 1e-04.
Epoch 00013: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.
(epochs 00004-00052: val_loss did not improve from 0.02501)
Epoch 00053: early stopping
DEMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 70.24761196199488
RMSE: 8.381384847505505
MAPE: 6.862692730259403
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
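KAMA adapts its smoothing constant to an efficiency ratio: the net change over the period divided by the sum of absolute bar-to-bar changes. In a clean trend the ratio is near 1 and KAMA tracks price quickly; in choppy noise it is near 0 and KAMA flattens out. A NumPy sketch following Kaufman's published formula (the fast/slow constants 2 and 30 are the conventional defaults, not values read from this notebook):

```python
import numpy as np

def kama(price, er_period=10, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (Kaufman's formula; not TA-Lib's code)."""
    price = np.asarray(price, dtype=float)
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    out = np.full(price.shape, np.nan)
    out[er_period] = price[er_period]  # seed with the first computable price
    for t in range(er_period + 1, len(price)):
        change = abs(price[t] - price[t - er_period])
        volatility = np.abs(np.diff(price[t - er_period:t + 1])).sum()
        er = change / volatility if volatility > 0 else 0.0  # efficiency ratio in [0, 1]
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2       # adaptive smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

The recursion out[t] = out[t-1] + sc·(price[t] − out[t-1]) is an EMA whose effective period changes bar by bar, which is what distinguishes KAMA from the fixed-period averages above.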
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.104, Time=3.80 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.591, Time=5.35 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16779.655, Time=10.90 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.590, Time=7.94 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16989.430, Time=3.84 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16990.286, Time=3.82 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.543, Time=3.81 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-16987.154, Time=4.18 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-16533.935, Time=16.39 sec
Best model: ARIMA(2,3,0)(0,0,0)[0]
Total fit time: 60.054 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(2, 3, 0) Log Likelihood 8527.143
Date: Sun, 12 Dec 2021 AIC -16990.286
Time: 16:41:16 BIC -16840.179
Sample: 0 HQIC -16932.639
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.1e-16 nan nan nan nan nan
x2 -3.811e-16 -0 inf 0.000 -3.81e-16 -3.81e-16
x3 8.776e-16 4.38e-27 2e+11 0.000 8.78e-16 8.78e-16
x4 1.0000 4.36e-27 2.29e+26 0.000 1.000 1.000
x5 6.686e-16 4.14e-27 1.61e+11 0.000 6.69e-16 6.69e-16
x6 -5.238e-17 9.44e-27 -5.55e+09 0.000 -5.24e-17 -5.24e-17
x7 -1.709e-16 4.37e-27 -3.91e+10 0.000 -1.71e-16 -1.71e-16
x8 1.439e-15 4.33e-27 3.32e+11 0.000 1.44e-15 1.44e-15
x9 -2.924e-16 5.73e-28 -5.1e+11 0.000 -2.92e-16 -2.92e-16
x10 -1.028e-16 1.78e-27 -5.76e+10 0.000 -1.03e-16 -1.03e-16
x11 -4.338e-16 4.31e-27 -1.01e+11 0.000 -4.34e-16 -4.34e-16
x12 1.72e-16 4.33e-27 3.97e+10 0.000 1.72e-16 1.72e-16
x13 -3.011e-16 4.36e-27 -6.91e+10 0.000 -3.01e-16 -3.01e-16
x14 -2.611e-16 1.27e-26 -2.06e+10 0.000 -2.61e-16 -2.61e-16
x15 1.53e-14 4.46e-27 3.43e+12 0.000 1.53e-14 1.53e-14
x16 -1.401e-14 5.45e-27 -2.57e+12 0.000 -1.4e-14 -1.4e-14
x17 2.316e-14 4.12e-27 5.62e+12 0.000 2.32e-14 2.32e-14
x18 -3.727e-15 3.71e-27 -1.01e+12 0.000 -3.73e-15 -3.73e-15
x19 -1.361e-14 4.94e-27 -2.75e+12 0.000 -1.36e-14 -1.36e-14
x20 -5.277e-15 6.08e-27 -8.68e+11 0.000 -5.28e-15 -5.28e-15
x21 1.178e-18 3.12e-27 3.77e+08 0.000 1.18e-18 1.18e-18
x22 -8.779e-17 1.74e-29 -5.05e+12 0.000 -8.78e-17 -8.78e-17
x23 3.183e-17 5.91e-29 5.39e+11 0.000 3.18e-17 3.18e-17
x24 -1.683e-16 1.41e-26 -1.19e+10 0.000 -1.68e-16 -1.68e-16
x25 8.988e-17 1.48e-30 6.08e+13 0.000 8.99e-17 8.99e-17
x26 4.435e-17 1.58e-26 2.8e+09 0.000 4.44e-17 4.44e-17
x27 1.538e-16 8.87e-27 1.73e+10 0.000 1.54e-16 1.54e-16
x28 1.635e-16 1.22e-26 1.34e+10 0.000 1.63e-16 1.63e-16
x29 1.474e-16 6.34e-27 2.33e+10 0.000 1.47e-16 1.47e-16
ar.L1 -0.9879 1.21e-22 -8.16e+21 0.000 -0.988 -0.988
ar.L2 -0.4879 1.29e-22 -3.79e+21 0.000 -0.488 -0.488
sigma2 1e-10 6.99e-11 1.432 0.152 -3.69e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 57.29 Jarque-Bera (JB): 559955.86
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.13 Skew: 0.64
Prob(H) (two-sided): 0.00 Kurtosis: 132.20
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number inf. Standard errors may be unstable.
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/mlemodel.py:2968: RuntimeWarning: divide by zero encountered in true_divide
  return self.params / self.bse
ARIMA order: (2, 3, 0)
WARNING:tensorflow:Layer lstm_61 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.03822, saving model to LSTM5.h5
Epoch 00006: ReduceLROnPlateau reducing learning rate to 1e-04.
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
(epochs 00002-00050: val_loss did not improve from 0.03822)
not improve from 0.03822 45/45 - 0s - loss: 0.0346 - val_loss: 0.0634 - lr: 1.0000e-05 - 382ms/epoch - 8ms/step Epoch 51/500 Epoch 00051: val_loss did not improve from 0.03822 45/45 - 0s - loss: 0.0344 - val_loss: 0.0647 - lr: 1.0000e-05 - 384ms/epoch - 9ms/step Epoch 00051: early stopping
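The learning-rate schedule visible in this log (0.001, then 1e-04, then a floor at 1e-05, followed by a halt once val_loss stops improving) is the behavior of Keras-style ReduceLROnPlateau and EarlyStopping callbacks. The notebook's actual patience settings are not shown, so the constants in this pure-Python sketch are illustrative, not the run's configuration:

```python
# Minimal simulation of the plateau logic seen in the log above: cut the
# learning rate by 10x after `lr_patience` epochs without a new best
# val_loss (floored at min_lr), and stop after `stop_patience` stale epochs.
def simulate_plateau(val_losses, lr=1e-3, factor=0.1, lr_patience=4,
                     stop_patience=10, min_lr=1e-5):
    best = float("inf")
    since_best = 0
    history = []                     # (epoch, lr in effect that epoch)
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:
            best, since_best = vl, 0
        else:
            since_best += 1
            if since_best % lr_patience == 0:
                lr = max(lr * factor, min_lr)   # ReduceLROnPlateau-style cut
        history.append((epoch, lr))
        if since_best >= stop_patience:         # EarlyStopping-style halt
            break
    return best, history

# Hypothetical val_loss trace shaped like the log: one early best, then a plateau.
best, history = simulate_plateau([0.038, 0.54, 0.46, 0.25, 0.066, 0.046,
                                  0.052, 0.055, 0.071, 0.063, 0.063, 0.063,
                                  0.064, 0.066, 0.066])
```

With these toy numbers the rate drops to 1e-04 at epoch 5 and 1e-05 at epoch 9, and training halts at epoch 11; the real run's cut points (epochs 6 and 11) depend on the notebook's unshown patience and cooldown settings.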
SMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 52.61% Accuracy
MSE: 34.39169744803393
RMSE: 5.864443490053761
MAPE: 4.893666026892695
EMA
Prediction vs Close: 53.73% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 73.04930062485933
RMSE: 8.546888359213506
MAPE: 6.613879572809731
WMA
Prediction vs Close: 55.97% Accuracy
Prediction vs Prediction: 47.39% Accuracy
MSE: 70.35376938042184
RMSE: 8.387715385039114
MAPE: 6.8547592718484545
DEMA
Prediction vs Close: 52.61% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 70.24761196199488
RMSE: 8.381384847505505
MAPE: 6.862692730259403
KAMA
Prediction vs Close: 49.63% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 27.01407660930758
RMSE: 5.1975067685677505
MAPE: 4.263533603346384
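The per-indicator summaries above report directional accuracy, MSE, RMSE, and MAPE. A sketch of how such metrics can be computed with NumPy; the function name and the sample arrays are illustrative, not the notebook's actual variables:

```python
import numpy as np

def evaluate(pred, close):
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    mse = np.mean((pred - close) ** 2)          # mean squared error
    rmse = np.sqrt(mse)                         # root mean squared error
    mape = np.mean(np.abs((close - pred) / close)) * 100  # mean abs. % error
    # Directional accuracy: does the predicted day-to-day move have the
    # same sign as the realized move in the close price?
    dir_acc = np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(close))) * 100
    return dir_acc, mse, rmse, mape

dir_acc, mse, rmse, mape = evaluate([10, 11, 10.5, 12], [10, 10.8, 10.9, 11.7])
```

Note that a "Prediction vs Close" accuracy near 50% is roughly what a coin flip would score on direction, so the MSE/RMSE/MAPE columns carry most of the signal in these comparisons.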
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
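Per the TA-Lib docstring above, MIDPOINT is simply the midpoint of the highest and lowest input values over the lookback window. A plain-NumPy sketch (edge handling is illustrative; TA-Lib returns NaN for the warm-up region the same way):

```python
import numpy as np

def midpoint(price, timeperiod=14):
    # MIDPOINT over `timeperiod`: (max + min) / 2 of the trailing window.
    price = np.asarray(price, float)
    out = np.full(price.shape, np.nan)   # warm-up region stays NaN
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = (window.max() + window.min()) / 2.0
    return out

vals = midpoint([1, 5, 3, 2, 4], timeperiod=3)
```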
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.238, Time=3.59 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.578, Time=5.37 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16746.296, Time=8.30 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.578, Time=8.25 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16987.591, Time=3.67 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16395.520, Time=12.87 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17063.555, Time=12.29 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.578, Time=10.73 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16082.554, Time=20.06 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-15249.608, Time=18.57 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 103.711 seconds
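Every candidate in the stepwise search above holds d=3 fixed, i.e. the smoothed series is differenced three times before the ARMA terms are fit. Third-order differencing is just `np.diff` with `n=3`; the series below is illustrative:

```python
import numpy as np

# An exact quadratic trend is removed entirely by the second difference,
# so its third difference is identically zero.
y = np.array([1.0, 4.0, 9.0, 16.0, 25.0, 36.0])   # perfect squares
d1 = np.diff(y)         # first difference
d3 = np.diff(y, n=3)    # what ARIMA(p, 3, q) models with its ARMA part
```

Needing d=3 is a hint that the moving-average smoothing has left a very persistent trend; it also explains the near-singular covariance warnings in the SARIMAX summaries that follow.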
SARIMAX Results
==============================================================================
Dep. Variable: y
No. Observations: 808
Model: SARIMAX(0, 3, 2)
Log Likelihood: 8563.778
Date: Sun, 12 Dec 2021
Time: 16:44:49
AIC: -17063.555
BIC: -16913.448
HQIC: -17005.908
Sample: 0 - 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.495e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x2 -1.485e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x3 -1.518e-10 0.000 -1.21e-06 1.000 -0.000 0.000
x4 1.0000 0.000 8075.329 0.000 1.000 1.000
x5 -1.356e-10 0.000 -1.15e-06 1.000 -0.000 0.000
x6 -2.861e-09 0.000 -2.38e-05 1.000 -0.000 0.000
x7 -1.374e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x8 -1.371e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x9 -7.133e-11 7.1e-06 -1.01e-05 1.000 -1.39e-05 1.39e-05
x10 -1.23e-10 4.21e-05 -2.92e-06 1.000 -8.24e-05 8.24e-05
x11 -1.357e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x12 -1.401e-10 0.000 -1.11e-06 1.000 -0.000 0.000
x13 -1.436e-10 0.000 -1.16e-06 1.000 -0.000 0.000
x14 -1.179e-09 0.000 -3.22e-06 1.000 -0.001 0.001
x15 -1.651e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x16 -1.064e-10 0.000 -9.62e-07 1.000 -0.000 0.000
x17 -1.041e-10 0.000 -9.53e-07 1.000 -0.000 0.000
x18 -4.477e-10 0.000 -1.99e-06 1.000 -0.000 0.000
x19 -1.816e-10 0.000 -1.26e-06 1.000 -0.000 0.000
x20 -4.37e-10 0.000 -1.96e-06 1.000 -0.000 0.000
x21 -1.371e-09 9.1e-05 -1.51e-05 1.000 -0.000 0.000
x22 -1.059e-11 nan nan nan nan nan
x23 -9.902e-11 3.83e-09 -0.026 0.979 -7.61e-09 7.41e-09
x24 -5.521e-09 0.000 -1.34e-05 1.000 -0.001 0.001
x25 -4.621e-09 6.42e-05 -7.2e-05 1.000 -0.000 0.000
x26 -1.587e-09 0.000 -3.73e-06 1.000 -0.001 0.001
x27 -8.504e-10 0.000 -2.79e-06 1.000 -0.001 0.001
x28 -1.122e-09 0.000 -3.14e-06 1.000 -0.001 0.001
x29 -6.091e-10 0.000 -2.45e-06 1.000 -0.000 0.000
ma.L1 -1.3318 7.32e-07 -1.82e+06 0.000 -1.332 -1.332
ma.L2 0.3767 7.56e-07 4.98e+05 0.000 0.377 0.377
sigma2 9.093e-11 6.97e-11 1.304 0.192 -4.57e-11 2.28e-10
===================================================================================
Ljung-Box (L1) (Q): 76.00
Prob(Q): 0.00
Heteroskedasticity (H): 0.03
Prob(H) (two-sided): 0.00
Jarque-Bera (JB): 304933.46
Prob(JB): 0.00
Skew: 1.65
Kurtosis: 98.29
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.19e+28. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
WARNING:tensorflow:Layer lstm_62 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.18148, saving model to LSTM5.h5
58/58 - 2s - loss: 0.2766 - val_loss: 0.1815 - lr: 0.0010 - 2s/epoch - 33ms/step
Epoch 3/500: val_loss improved from 0.18148 to 0.16577, saving model to LSTM5.h5
58/58 - 1s - loss: 0.0762 - val_loss: 0.1658 - lr: 0.0010
Epoch 00008: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000e-05
[epochs 4-52 elided: val_loss did not improve from 0.16577]
Epoch 53/500: 58/58 - 1s - loss: 0.0266 - val_loss: 0.1989 - lr: 1.0000e-05
Epoch 00053: early stopping
MIDPOINT
Prediction vs Close: 50.37% Accuracy
Prediction vs Prediction: 46.64% Accuracy
MSE: 37.16076795716489
RMSE: 6.095963250969029
MAPE: 5.0853544537748006
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
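The T3 described in the docstring above is Tim Tillson's triple "generalized DEMA": GD(x) = EMA(x)·(1+v) − EMA(EMA(x))·v, applied three times. A pandas sketch under that assumption; TA-Lib's warm-up (unstable period) handling differs, so values near the start of a series will not match the library exactly:

```python
import pandas as pd

def t3(price, timeperiod=5, vfactor=0.7):
    s = pd.Series(price, dtype=float)
    def ema(x):
        # Recursive EMA seeded at the first value, as TA-Lib's EMA is.
        return x.ewm(span=timeperiod, adjust=False).mean()
    def gd(x):
        # Generalized DEMA: overshoot the EMA by vfactor of the lag.
        return ema(x) * (1 + vfactor) - ema(ema(x)) * vfactor
    return gd(gd(gd(s)))

out = t3([100.0] * 10)   # a constant series is a fixed point of T3
```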
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16837.838, Time=3.49 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14497.319, Time=3.90 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16084.348, Time=6.59 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.920, Time=11.42 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15304.480, Time=11.34 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15949.053, Time=12.49 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17059.707, Time=11.52 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15313.920, Time=14.44 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16054.952, Time=13.33 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11445.350, Time=35.07 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 123.603 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y
No. Observations: 808
Model: SARIMAX(0, 3, 2)
Log Likelihood: 8561.853
Date: Sun, 12 Dec 2021
Time: 16:51:30
AIC: -17059.707
BIC: -16909.600
HQIC: -17002.059
Sample: 0 - 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.003e-07 7.69e-05 -0.001 0.999 -0.000 0.000
x2 -1.001e-07 7.44e-05 -0.001 0.999 -0.000 0.000
x3 -1.006e-07 7.84e-05 -0.001 0.999 -0.000 0.000
x4 1.0000 7.11e-05 1.41e+04 0.000 1.000 1.000
x5 -9.611e-08 6.77e-05 -0.001 0.999 -0.000 0.000
x6 -1.249e-07 4.06e-05 -0.003 0.998 -7.96e-05 7.94e-05
x7 -1e-07 7.89e-05 -0.001 0.999 -0.000 0.000
x8 -0.0002 9.43e-05 -1.838 0.066 -0.000 1.15e-05
x9 2.853e-08 9.89e-05 0.000 1.000 -0.000 0.000
x10 -4.022e-05 0.000 -0.200 0.842 -0.000 0.000
x11 0.0003 7e-05 4.122 0.000 0.000 0.000
x12 7.55e-05 0.000 0.633 0.527 -0.000 0.000
x13 -1.005e-07 7.29e-05 -0.001 0.999 -0.000 0.000
x14 -2.756e-07 0.000 -0.001 0.999 -0.000 0.000
x15 -8.419e-08 8.98e-05 -0.001 0.999 -0.000 0.000
x16 -2.171e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.105e-07 9.93e-05 -0.001 0.999 -0.000 0.000
x18 1.263e-07 3.22e-05 0.004 0.997 -6.31e-05 6.33e-05
x19 -8.769e-08 0.000 -0.001 0.999 -0.000 0.000
x20 -5.772e-08 0.000 -0.000 1.000 -0.000 0.000
x21 -9.77e-08 0.000 -0.001 1.000 -0.000 0.000
x22 -3.686e-12 7.09e-07 -5.2e-06 1.000 -1.39e-06 1.39e-06
x23 -9.216e-12 2.4e-05 -3.83e-07 1.000 -4.71e-05 4.71e-05
x24 -3.648e-07 0.000 -0.001 0.999 -0.001 0.001
x25 -1.391e-07 0.001 -0.000 1.000 -0.002 0.002
x26 -3.142e-07 0.000 -0.001 0.999 -0.001 0.001
x27 -3.042e-07 5.47e-05 -0.006 0.996 -0.000 0.000
x28 -1.785e-07 0.000 -0.001 0.999 -0.000 0.000
x29 -1.909e-07 0.000 -0.001 1.000 -0.001 0.001
ma.L1 -1.3901 8.24e-06 -1.69e+05 0.000 -1.390 -1.390
ma.L2 0.4035 2.01e-05 2.01e+04 0.000 0.403 0.404
sigma2 7.538e-11 6.94e-11 1.085 0.278 -6.07e-11 2.11e-10
===================================================================================
Ljung-Box (L1) (Q): 69.36
Prob(Q): 0.00
Heteroskedasticity (H): 0.00
Prob(H) (two-sided): 0.00
Jarque-Bera (JB): 6470073.86
Prob(JB): 0.00
Skew: -12.55
Kurtosis: 441.48
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.58e+22. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
WARNING:tensorflow:Layer lstm_63 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 1/500: val_loss improved from inf to 0.02752, saving model to LSTM5.h5
43/43 - 2s - loss: 0.4677 - val_loss: 0.0275 - lr: 0.0010 - 2s/epoch - 42ms/step
Epoch 5/500: val_loss improved from 0.02752 to 0.01627, saving model to LSTM5.h5
43/43 - 0s - loss: 0.0612 - val_loss: 0.0163 - lr: 0.0010
Epoch 00010: ReduceLROnPlateau reducing learning rate to 1.0000e-04
Epoch 00015: ReduceLROnPlateau reducing learning rate to 1.0000e-05
[epochs 6-54 elided: val_loss did not improve from 0.01627]
Epoch 55/500: 43/43 - 0s - loss: 0.0290 - val_loss: 0.0245 - lr: 1.0000e-05
Epoch 00055: early stopping
T3
Prediction vs Close: 56.34% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 104.49209322707955
RMSE: 10.222137409909903
MAPE: 7.958642954509092
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
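TEMA, per the docstring above, is the standard triple exponential moving average: 3·EMA − 3·EMA(EMA) + EMA(EMA(EMA)). A pandas sketch under that assumption; as with T3, TA-Lib's unstable-period handling at the start of the series differs from this seeding:

```python
import pandas as pd

def tema(price, timeperiod=30):
    s = pd.Series(price, dtype=float)
    e1 = s.ewm(span=timeperiod, adjust=False).mean()   # EMA
    e2 = e1.ewm(span=timeperiod, adjust=False).mean()  # EMA of EMA
    e3 = e2.ewm(span=timeperiod, adjust=False).mean()  # EMA of EMA of EMA
    return 3 * e1 - 3 * e2 + e3   # cancels most of the EMA's lag

vals = tema([50.0] * 8)   # constant input: 3c - 3c + c = c everywhere
```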
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16736.686, Time=3.56 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-15327.143, Time=3.41 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15166.078, Time=7.35 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14962.662, Time=14.62 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16731.606, Time=5.79 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14848.952, Time=10.56 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16921.745, Time=6.30 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14958.662, Time=17.92 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15003.046, Time=12.85 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16752.122, Time=4.25 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 86.638 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8492.873
Date: Sun, 12 Dec 2021 AIC -16921.745
Time: 16:57:41 BIC -16771.638
Sample: 0 HQIC -16864.098
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.277e-08 0.001 3.25e-05 1.000 -0.001 0.001
x2 2.286e-08 0.001 2.5e-05 1.000 -0.002 0.002
x3 2.286e-08 0.001 3.44e-05 1.000 -0.001 0.001
x4 1.0000 0.000 3190.279 0.000 0.999 1.001
x5 2.174e-08 0.001 4.21e-05 1.000 -0.001 0.001
x6 6.124e-09 3.05e-05 0.000 1.000 -5.97e-05 5.97e-05
x7 2.246e-08 0.001 1.67e-05 1.000 -0.003 0.003
x8 -0.0013 0.001 -1.669 0.095 -0.003 0.000
x9 -5.239e-09 0.000 -1.79e-05 1.000 -0.001 0.001
x10 0.0001 9.9e-05 1.396 0.163 -5.59e-05 0.000
x11 -0.0001 0.001 -0.177 0.859 -0.002 0.001
x12 0.0012 0.001 1.426 0.154 -0.000 0.003
x13 2.284e-08 0.000 6.75e-05 1.000 -0.001 0.001
x14 6.258e-08 0.001 5.07e-05 1.000 -0.002 0.002
x15 2.215e-08 0.000 0.000 1.000 -0.000 0.000
x16 3.243e-08 0.000 0.000 1.000 -0.001 0.001
x17 2.22e-08 0.000 0.000 1.000 -0.000 0.000
x18 7.527e-09 0.000 1.67e-05 1.000 -0.001 0.001
x19 2.477e-08 0.000 0.000 1.000 -0.000 0.000
x20 -2.348e-08 0.000 -5.78e-05 1.000 -0.001 0.001
x21 2.718e-08 5.8e-05 0.000 1.000 -0.000 0.000
x22 -2.176e-10 0.000 -5.27e-07 1.000 -0.001 0.001
x23 -2.69e-09 8.49e-05 -3.17e-05 1.000 -0.000 0.000
x24 -4.516e-08 7.24e-06 -0.006 0.995 -1.42e-05 1.41e-05
x25 -4.213e-08 2.81e-05 -0.002 0.999 -5.51e-05 5.5e-05
x26 7.946e-08 0.001 0.000 1.000 -0.001 0.001
x27 4.528e-08 0.001 6.21e-05 1.000 -0.001 0.001
x28 5.92e-08 0.001 4.12e-05 1.000 -0.003 0.003
x29 3.468e-08 0.000 7.06e-05 1.000 -0.001 0.001
ma.L1 -1.3739 4.46e-06 -3.08e+05 0.000 -1.374 -1.374
ma.L2 0.3968 1.4e-05 2.84e+04 0.000 0.397 0.397
sigma2 7.701e-11 7.39e-11 1.043 0.297 -6.78e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 61.47 Jarque-Bera (JB): 5565463.09
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 10.97
Prob(H) (two-sided): 0.00 Kurtosis: 409.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.67e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
WARNING:tensorflow:Layer lstm_64 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_64 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Epoch 00001: val_loss improved from inf to 0.06427, saving model to LSTM5.h5. Epoch 00005: val_loss improved to 0.01407. Epoch 00010: val_loss improved to 0.00690. Epoch 00013: val_loss improved to 0.00570, saving model to LSTM5.h5.
Epoch 00018: ReduceLROnPlateau reducing learning rate to 1.0000e-04. Epoch 00023: ReduceLROnPlateau reducing learning rate to 1.0000e-05.
[Epochs 14–63: val_loss did not improve from 0.00570; training loss settled around 0.018–0.023] Epoch 00063: early stopping
TEMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 72.80283545670305
RMSE: 8.532457761788397
MAPE: 7.653550657820228
Runtime: mins: 57.24369045236666
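As a quick sanity check on the metrics above: the evaluation code computes `rmse = mse ** 0.5`, so each reported RMSE should be exactly the square root of the corresponding MSE. A minimal check using the reported SMA pair:

```python
import math

# Reported test metrics for the SMA series (copied from the output above)
mse_sma = 34.39169744803393
rmse_sma = 5.864443490053761

# RMSE should be the square root of MSE
assert math.isclose(math.sqrt(mse_sma), rmse_sma, rel_tol=1e-12)
```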
from google.colab import files
import cv2
uploaded = files.upload()
Saving Experiment5.png to Experiment5 (1).png
img = cv2.imread('Experiment5.png')
plt.figure(figsize=(20,10))
plt.axis("off")
plt.title('LSTM Architecture '+imgfile,fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fa5ea16d7d0>
for i in range(len(list(simulation5.keys()))):
SIM = list(simulation5.keys())[i]
plot_train(simulation5,SIM)
plot_test(simulation5,SIM)
----- Train RMSE for SMA ----- 7.636357212019493 ----- Train_MSE_LSTM for SMA ----- 58.31395146956212 ----- Train MAE LSTM for SMA ----- 6.563417649903971
----- Test RMSE for SMA----- 5.864443490053761 ----- Test_MSE_LSTM for SMA----- 34.39169744803393 ----- Test_MAE_LSTM for SMA----- 4.893666026892695
----- Train RMSE for EMA ----- 9.308960691198413 ----- Train_MSE_LSTM for EMA ----- 86.65674915027724 ----- Train MAE LSTM for EMA ----- 8.140100172619418
----- Test RMSE for EMA----- 8.546888359213506 ----- Test_MSE_LSTM for EMA----- 73.04930062485933 ----- Test_MAE_LSTM for EMA----- 6.613879572809731
----- Train RMSE for WMA ----- 9.79972862979708 ----- Train_MSE_LSTM for WMA ----- 96.03468121766456 ----- Train MAE LSTM for WMA ----- 8.675562439114087
----- Test RMSE for WMA----- 8.387715385039114 ----- Test_MSE_LSTM for WMA----- 70.35376938042184 ----- Test_MAE_LSTM for WMA----- 6.8547592718484545
----- Train RMSE for DEMA ----- 11.043189962759177 ----- Train_MSE_LSTM for DEMA ----- 121.95204455358504 ----- Train MAE LSTM for DEMA ----- 9.832117659380973
----- Test RMSE for DEMA----- 8.381384847505505 ----- Test_MSE_LSTM for DEMA----- 70.24761196199488 ----- Test_MAE_LSTM for DEMA----- 6.862692730259403
----- Train RMSE for KAMA ----- 9.344372249039948 ----- Train_MSE_LSTM for KAMA ----- 87.3172927286279 ----- Train MAE LSTM for KAMA ----- 8.318259338837343
----- Test RMSE for KAMA----- 5.1975067685677505 ----- Test_MSE_LSTM for KAMA----- 27.01407660930758 ----- Test_MAE_LSTM for KAMA----- 4.263533603346384
----- Train RMSE for MIDPOINT ----- 8.517306058165431 ----- Train_MSE_LSTM for MIDPOINT ----- 72.54450248846156 ----- Train MAE LSTM for MIDPOINT ----- 7.583730384747793
----- Test RMSE for MIDPOINT----- 6.095963250969029 ----- Test_MSE_LSTM for MIDPOINT----- 37.16076795716489 ----- Test_MAE_LSTM for MIDPOINT----- 5.0853544537748006
----- Train RMSE for T3 ----- 10.856747466888622 ----- Train_MSE_LSTM for T3 ----- 117.86896555979253 ----- Train MAE LSTM for T3 ----- 9.766406240784287
----- Test RMSE for T3----- 10.222137409909903 ----- Test_MSE_LSTM for T3----- 104.49209322707955 ----- Test_MAE_LSTM for T3----- 7.958642954509092
----- Train RMSE for TEMA ----- 6.927888385272954 ----- Train_MSE_LSTM for TEMA ----- 47.9956374787999 ----- Train MAE LSTM for TEMA ----- 4.706679976581726
----- Test RMSE for TEMA----- 8.532457761788397 ----- Test_MSE_LSTM for TEMA----- 72.80283545670305 ----- Test_MAE_LSTM for TEMA----- 7.653550657820228
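The hybrid's decomposition step, which the code below performs on the real price data, can be sketched in isolation. This is an illustrative example on synthetic data: the close price is split into a smooth moving-average component (modelled by ARIMA) and a residual high-volatility component (modelled by the LSTM), and the two components sum back to the original series by construction.

```python
import numpy as np
import pandas as pd

# Synthetic price series standing in for df['close']
rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 200)))

low_vol = close.rolling(window=10).mean().fillna(0)   # smooth MA component (ARIMA input)
high_vol = close.subtract(low_vol, fill_value=0)      # residual component (LSTM input)

# By construction, the two components recombine to the original series
assert np.allclose(low_vol + high_vol, close)
```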
def get_arima_exog(dataframe, original_data, train_len, test_len):
# Prepare train and test data for the exogenous variables
X_value = pd.DataFrame(dataframe.iloc[:, :])
y_value = pd.DataFrame(dataframe.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
# Get data and check shape
# X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)#X will be of shape 224 X 3 X 21 (each 3 X 21 array will be 3 days' worth of data). yc will have the corresponding closing price value
# pdb.set_trace()
X_train, X_test = split_train_test(X_scale_dataset)
y_train, y_test = split_train_test(y_scale_dataset)
yc_train, yc_test = split_train_test(original_data)
yc = yc_test.values.tolist()
y_train_list = y_train.flatten().tolist()
y_test_list = y_test.flatten().tolist()
# yc_train, yc_test, = split_train_test(original_data)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
# Initialize model
model = auto_arima(y_train_list,exogenous = X_train,trace=True, error_action='ignore', start_p=1,start_q=1,max_p=3,max_q=3,d=3,
suppress_warnings=True,stepwise=True,seasonal=True)
# Determine model parameters
print(model.summary())
model.fit(y_train_list,maxiter=200)
order = model.get_params()['order']
print('ARIMA order:', order, '\n')
# Generate predictions
prediction = []
for i in range(len(y_test_list)):
model = pmdarima.ARIMA(order=order)
model.fit(y_train_list)
# print('working on', i+1, 'of', len(y_test), '-- ' + str(int(100 * (i + 1) / len(y_test))) + '% complete')
prediction.append(model.predict()[0])
y_train_list.append(y_test_list[i])
predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1,1))
y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1,1))
# Generate error data
mse = mean_squared_error(yc_test, predictionte)
rmse = mse ** 0.5
mae = mean_absolute_error(y_test_ , predictionte )
return yc,predictionte.flatten().tolist(), mse, rmse, mae
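The prediction loop in `get_arima_exog` is a walk-forward (expanding-window) one-step forecast: at each test step the model is refit on everything seen so far, predicts one step ahead, and then the true observation is appended before moving on. A dependency-free sketch of that loop mechanics, with a naive last-value "model" standing in for `pmdarima.ARIMA`:

```python
# Minimal walk-forward sketch: the naive model forecasts the last observed value
train = [1.0, 2.0, 3.0]
test = [4.0, 5.0, 6.0]

history = list(train)
preds = []
for obs in test:
    preds.append(history[-1])   # one-step forecast from the refit "model"
    history.append(obs)         # walk forward: reveal the true value

# Each forecast lags the test series by one step
assert preds == [3.0, 4.0, 5.0]
```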
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
# prepare train and test data
X_value = pd.DataFrame(data.iloc[:, :])
y_value = pd.DataFrame(data.iloc[:, 3])
X_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaler = MinMaxScaler(feature_range=(-1, 1))
X_scale_dataset = X_scaler.fit_transform(X_value)
y_scale_dataset = y_scaler.fit_transform(y_value)
# Get data and check shape
X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)  # X has shape 224 x 3 x 21 (each 3 x 21 array is 3 days of data); yc holds the corresponding closing prices
# pdb.set_trace()
X_train, X_test, = split_train_test(X)
y_train, y_test, = split_train_test(y)
# yc_train, yc_test, = split_train_test(original_data)
index_train, index_test, = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
det = 20  # fixed offset subtracted from the inverse-transformed test predictions below
input_dim = X_train.shape[1]     # 3
feature_size = X_train.shape[2]  # 24
output_dim = y_train.shape[1]    # 1
# Option 1
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
# model.add(Dense(units=64,activation='relu'))
# model.add(Dropout(0.5))
# model.add(Dense(units=output_dim))
# model.compile(optimizer=Adam(learning_rate = 0.001), loss='mse')
# ## Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# # option 2
model = Sequential()
model.add(Bidirectional(LSTM(units= 128), input_shape=(input_dim, feature_size)))
model.add(Dense(64))
model.add(Dense(units=output_dim))
model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
# Common code
callbacks = [
EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
ModelCheckpoint('LSTM6.h5', verbose=1, save_best_only=True, save_weights_only=True)]
fname1 = img_file+'.png'
tensorflow.keras.utils.plot_model(
model, to_file=fname1, show_shapes=True, show_dtype=False,
show_layer_names=True, expand_nested=False, dpi=96,
layer_range=None, show_layer_activations=False
)
history = model.fit(X_train, y_train, epochs=500, batch_size=int( optimized_period[ma]), verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# plot loss
fname2 = img_file+'-'+ma
plt.title(img_file+'-'+ma+' Loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='validation')
pyplot.legend()
pyplot.savefig(fname2+'.png',dpi='figure')
pyplot.show()
# Option 3
# define custom activation
# reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
# class Double_Tanh(Activation):
# def __init__(self, activation, **kwargs):
# super(Double_Tanh, self).__init__(activation, **kwargs)
# self.__name__ = 'double_tanh'
# def double_tanh(x):
# return (K.tanh(x) * 2)
# get_custom_objects().update({'double_tanh':Double_Tanh(double_tanh)})
# # Model Generation
# model = Sequential()
# #check https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
# model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00,0.00), bias_regularizer=l1_l2(0.00,0.00)))
# model.add(Dense(1))
# model.add(Activation(double_tanh))
# model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Option 4
# Set up & fit LSTM RNN
# model = Sequential()
# model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(x_train.shape[1], 1)))
# model.add(LSTM(units=int(lstm_len/2)))
# model.add(Dense(1, activation='sigmoid'))
# model.compile(loss='mean_squared_error', optimizer='adam')
# # Common code
# callbacks = [
# EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
# ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
# ModelCheckpoint('LSTM1.h5', verbose=1, save_best_only=True, save_weights_only=True)]
# fname1 = img_file+'.png'
# tensorflow.keras.utils.plot_model(
# model, to_file=fname1, show_shapes=True, show_dtype=False,
# show_layer_names=True, expand_nested=False, dpi=96,
# layer_range=None, show_layer_activations=False
# )
# history = model.fit(X_train, y_train, epochs=500, batch_size=1, verbose=2, callbacks=callbacks, validation_data=(X_test, y_test),shuffle=False)
# # plot loss
# fname2 = img_file+'-'+ma
# plt.title(img_file+'-'+ma+' Loss')
# plt.xlabel("Epochs")
# plt.ylabel("Loss")
# pyplot.plot(history.history['loss'], label='train')
# pyplot.plot(history.history['val_loss'], label='validation')
# pyplot.legend()
# pyplot.savefig(fname2+'.png',dpi='figure')
# pyplot.show()
# Generate predictions
predictiontr = model.predict(X_train, verbose=0)
predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
outputtr = []
for i in range(len(predictiontr)):
outputtr.extend(predictiontr[i])
predictiontr = outputtr
# Generate error data
## replace with yc , xtest generated by new multistep method
mse_tr = mean_squared_error(y_train, predictiontr)
rmse_tr = mse_tr ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictiontr))
mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
# Original_tr = pd.Series(yc_train)
Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
predictionte = model.predict(X_test, verbose=0)
predictionte = (y_scaler.inverse_transform(predictionte)-det).tolist()
outputte = []
for i in range(len(predictionte)):
outputte.extend(predictionte[i])
predictionte = outputte
# Generate error data
mse_te = mean_squared_error(y_test, predictionte)
rmse_te = mse_te ** 0.5
# mape = mean_absolute_percentage_error(X_test, pd.Series(predictionte))
mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
# Original_te = pd.Series(yc_test)
Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
return Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,Original_te,predictionte, mse_te,rmse_te,mae_te
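`get_lstm` scales the targets to [-1, 1] for training and maps predictions back to price scale with `inverse_transform` before computing errors. A small round-trip sketch of that scaling step:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy close prices standing in for the y_value column
y_value = np.array([[10.0], [12.0], [11.0], [15.0]])
y_scaler = MinMaxScaler(feature_range=(-1, 1))
y_scaled = y_scaler.fit_transform(y_value)

# Min maps to -1, max maps to +1
assert y_scaled.min() == -1.0 and y_scaled.max() == 1.0
# inverse_transform recovers the original price scale
assert np.allclose(y_scaler.inverse_transform(y_scaled), y_value)
```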
if __name__ == '__main__':
start_time = timeit.default_timer()
simulation6 = {}
imgfile = 'Experiment6'
for ma in optimized_period:
print(ma)
print(functions[ma])
print ( int( optimized_period[ma]))
# if ma == 'SMA':
low_vol = df.apply(lambda c: functions[ma](c, timeperiod = int( optimized_period[ma])))
low_vol = low_vol.fillna(0)
low_vol_data = df['close']
high_vol = pd.DataFrame()
df2 = df.copy()
for i in df2.columns:
if i in low_vol.columns:
high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
high_vol_data = df['close']
## *****************************************************
# Generate ARIMA and LSTM predictions
print('\nWorking on ' + ma + ' predictions')
try:
print('parameters used : ', train_len, test_len)
low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse,low_vol_mae = get_arima_exog(low_vol,low_vol_data, train_len, test_len)
except Exception:
print('ARIMA error, skipping to next MA type')
continue
Original_tr, predictiontr, mse_tr, rmse_tr,mae_tr,high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse,high_vol_mae, = get_lstm(high_vol,high_vol_data, train_len, test_len,imgfile,ma)
final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr) # ignoring first 3 steps
mse_ftr = mean_squared_error(df['close'].head(train_len).values,final_prediction_tr.values)
rmse_ftr = mse_ftr ** 0.5
mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
mse = mean_squared_error(df['close'].tail(test_len).values,final_prediction.values)
rmse = mse ** 0.5
mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
# Generate prediction accuracy
actual = df['close'].tail(test_len).values
result_1 = []
result_2 = []
for i in range(1, len(final_prediction)):
# Compare prediction to previous close price
if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
result_1.append(1)
elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
result_1.append(1)
else:
result_1.append(0)
# Compare prediction to previous prediction
if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
result_2.append(1)
elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
result_2.append(1)
else:
result_2.append(0)
accuracy_1 = np.mean(result_1)
accuracy_2 = np.mean(result_2)
simulation6[ma] = {'low_vol': {'original':list(low_vol_Original), 'prediction': list(low_vol_prediction) , 'mse': low_vol_mse,
'rmse': low_vol_rmse, 'mae' : low_vol_mae},
'high_vol': {'original':list(high_vol_Original),'prediction': list(high_vol_prediction), 'mse': high_vol_mse,
'rmse': high_vol_rmse, 'mae' : high_vol_mae},
'final_tr': {'original':df['close'].head(train_len).tolist(),'prediction': final_prediction_tr.values.tolist(), 'mse': mse_ftr,
'rmse': rmse_ftr, 'mae' : mae_ftr},
'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(), 'mse': mse,
'rmse': rmse, 'mae': mae },
'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
# save simulation data here as checkpoint
with open('simulation6_data.json', 'w') as fp:
json.dump(simulation6, fp)
for ma in simulation6.keys():
print('\n' + ma)
print('Prediction vs Close:\t\t' + str(round(100*simulation6[ma]['accuracy']['prediction vs close'], 2))
+ '% Accuracy')
print('Prediction vs Prediction:\t' + str(round(100*simulation6[ma]['accuracy']['prediction vs prediction'], 2))
+ '% Accuracy')
print('MSE:\t', simulation6[ma]['final']['mse'],
'\nRMSE:\t', simulation6[ma]['final']['rmse'],
'\nMAPE:\t', simulation6[ma]['final']['mae'])  # NB: the value printed under the MAPE label is the stored MAE
# '\nMAPE:\t', simulation6[ma]['final']['mape'])
# else:
# break
elapsed = timeit.default_timer() - start_time
print('Runtime: mins:',elapsed/60)
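The "Prediction vs Close" directional accuracy computed in the loop above can be sketched as a standalone function: a step counts as correct when the prediction and the actual close move to the same side of the previous close. The data below is a tiny hypothetical example, not taken from the experiment.

```python
import numpy as np

def directional_accuracy(prediction, actual):
    # Mirrors the result_1 loop: compare each prediction's direction
    # (relative to the previous close) against the actual direction
    hits = []
    for i in range(1, len(prediction)):
        up = prediction[i] > actual[i - 1] and actual[i] > actual[i - 1]
        down = prediction[i] < actual[i - 1] and actual[i] < actual[i - 1]
        hits.append(1 if up or down else 0)
    return np.mean(hits)

actual = [10.0, 11.0, 10.5, 10.8]
prediction = [10.0, 10.6, 10.9, 10.3]
acc = directional_accuracy(prediction, actual)  # 2 of 3 directions correct
```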
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.787, Time=3.67 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.588, Time=5.52 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14596.280, Time=5.65 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.588, Time=8.39 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16924.805, Time=10.47 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14482.349, Time=11.23 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17215.608, Time=20.61 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.588, Time=10.76 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15570.350, Time=19.25 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11671.292, Time=28.75 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 124.319 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8639.804
Date: Sun, 12 Dec 2021 AIC -17215.608
Time: 17:11:20 BIC -17065.501
Sample: 0 HQIC -17157.961
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.057e-09 5.82e-05 -6.97e-05 1.000 -0.000 0.000
x2 -4.057e-09 5.81e-05 -6.99e-05 1.000 -0.000 0.000
x3 -4.111e-09 5.49e-05 -7.49e-05 1.000 -0.000 0.000
x4 1.0000 5.71e-05 1.75e+04 0.000 1.000 1.000
x5 -3.706e-09 5.43e-05 -6.82e-05 1.000 -0.000 0.000
x6 -1.082e-08 0.000 -6.08e-05 1.000 -0.000 0.000
x7 -4.025e-09 5.63e-05 -7.15e-05 1.000 -0.000 0.000
x8 -4.035e-09 5.19e-05 -7.78e-05 1.000 -0.000 0.000
x9 -1.522e-10 2.9e-05 -5.25e-06 1.000 -5.68e-05 5.68e-05
x10 -6.396e-10 1.04e-05 -6.15e-05 1.000 -2.04e-05 2.04e-05
x11 -3.921e-09 5.06e-05 -7.75e-05 1.000 -9.91e-05 9.91e-05
x12 -4.102e-09 5.29e-05 -7.76e-05 1.000 -0.000 0.000
x13 -4.087e-09 5.75e-05 -7.11e-05 1.000 -0.000 0.000
x14 -3.619e-08 0.000 -0.000 1.000 -0.000 0.000
x15 -4.806e-09 4.61e-05 -0.000 1.000 -9.03e-05 9.03e-05
x16 -3.507e-09 0.000 -2.98e-05 1.000 -0.000 0.000
x17 -3.121e-09 6.02e-05 -5.18e-05 1.000 -0.000 0.000
x18 -1.172e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -5.433e-09 6.06e-05 -8.96e-05 1.000 -0.000 0.000
x20 -1.393e-08 4.79e-05 -0.000 1.000 -9.39e-05 9.39e-05
x21 -4.216e-09 6.63e-05 -6.36e-05 1.000 -0.000 0.000
x22 -3.479e-11 1.66e-08 -0.002 0.998 -3.25e-08 3.24e-08
x23 -9.221e-10 1.4e-07 -0.007 0.995 -2.74e-07 2.73e-07
x24 -8.085e-08 0.001 -6.96e-05 1.000 -0.002 0.002
x25 -9.642e-08 0.001 -0.000 1.000 -0.002 0.002
x26 -5.019e-08 0.000 -0.000 1.000 -0.000 0.000
x27 -2.457e-08 7.65e-05 -0.000 1.000 -0.000 0.000
x28 -3.411e-08 0.000 -0.000 1.000 -0.000 0.000
x29 -1.507e-08 4.36e-05 -0.000 1.000 -8.54e-05 8.54e-05
ma.L1 -1.3898 8.03e-07 -1.73e+06 0.000 -1.390 -1.390
ma.L2 0.4031 8.36e-07 4.82e+05 0.000 0.403 0.403
sigma2 7.528e-11 7.24e-11 1.040 0.298 -6.66e-11 2.17e-10
===================================================================================
Ljung-Box (L1) (Q): 89.12 Jarque-Bera (JB): 1533103.33
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.56
Prob(H) (two-sided): 0.00 Kurtosis: 216.50
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.08e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
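`auto_arima` ranks the candidates above by AIC = 2k − 2·logL, where k is the number of estimated parameters. The best SMA model's reported AIC can be checked against its reported log-likelihood: the summary table lists 29 exogenous coefficients plus `ma.L1`, `ma.L2`, and `sigma2`, so k = 32:

```python
# verify AIC = 2k - 2*logL for the best SMA model above
log_likelihood = 8639.804        # from the SARIMAX results table
k = 29 + 2 + 1                   # 29 exog coefficients + 2 MA terms + sigma2
aic = 2 * k - 2 * log_likelihood
print(round(aic, 3))             # matches the reported AIC of -17215.608
```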
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500: val_loss improved from inf to 0.04685, saving model to LSTM6.h5 (loss: 0.1506, lr: 1.0e-03)
Epoch 2/500: val_loss improved from 0.04685 to 0.00855, saving model to LSTM6.h5 (loss: 0.0267)
Epochs 3-6/500: val_loss did not improve from 0.00855
Epoch 7/500: ReduceLROnPlateau reducing learning rate to 1.0e-04
Epochs 8-9/500: val_loss did not improve from 0.00855
Epochs 10-31/500: val_loss improved every epoch, from 0.00855 down to 0.00358, saving model to LSTM6.h5 (loss fell from 0.0030 to ~0.0010)
Epoch 31/500: ReduceLROnPlateau reducing learning rate to 1.0e-05
Epochs 32-80/500: val_loss did not improve from 0.00358 (loss plateaued near 8.6e-04 while val_loss drifted up to 0.0044)
Epoch 81/500: early stopping
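The training run above pairs `ReduceLROnPlateau` (cutting the learning rate 1e-03 → 1e-04 → 1e-05) with early stopping. The scheduling logic visible in the log can be sketched in plain Python; the factor, patience, and `min_lr` values here are assumptions inferred from the log, not read from the notebook's callback definitions:

```python
def lr_schedule(val_losses, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Mimic ReduceLROnPlateau: cut lr by `factor` after `patience` epochs
    without a new best val_loss, never going below `min_lr`."""
    best, wait, history = float('inf'), 0, []
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr, wait = max(lr * factor, min_lr), 0
        history.append(lr)
    return history

# five non-improving epochs trigger each reduction; lr never drops below min_lr
print(lr_schedule([0.05] + [0.06] * 12))
```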
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 75.03401716737034
RMSE: 8.662217797271685
MAE: 7.077228582293258
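As a quick sanity check on the printed metrics, RMSE should equal the square root of MSE; the SMA numbers above are consistent:

```python
import math

mse = 75.03401716737034    # SMA final-test MSE from the run above
rmse = 8.662217797271685   # SMA final-test RMSE from the run above
assert math.isclose(math.sqrt(mse), rmse)
print(math.sqrt(mse))
```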
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
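Unlike the SMA, the EMA described above weights recent bars more heavily: ema_t = α·price_t + (1−α)·ema_{t−1} with α = 2/(timeperiod+1). A dependency-free sketch (this seeds the recursion with the SMA of the first `timeperiod` bars, which is TA-Lib's usual convention, though that detail is an assumption here):

```python
def ema(prices, timeperiod):
    """Exponential moving average seeded with the SMA of the first window."""
    alpha = 2.0 / (timeperiod + 1)
    out = [None] * (timeperiod - 1)               # warm-up region
    prev = sum(prices[:timeperiod]) / timeperiod  # SMA seed
    out.append(prev)
    for price in prices[timeperiod:]:
        prev = alpha * price + (1 - alpha) * prev
        out.append(prev)
    return out

print(ema([1, 2, 3, 4, 5], 3))  # [None, None, 2.0, 3.0, 4.0]
```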
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.45 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.48 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15952.568, Time=15.38 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=8.39 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16628.634, Time=10.82 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16462.206, Time=25.75 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16848.298, Time=13.21 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.023, Time=6.97 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.619, Time=3.98 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=8.34 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=18.69 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.994, Time=4.30 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.667, Time=4.98 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 129.764 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 17:17:37 BIC -16911.966
Sample: 0 HQIC -17010.204
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.316e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x2 -2.309e-10 6.24e-05 -3.7e-06 1.000 -0.000 0.000
x3 -2.325e-10 6.26e-05 -3.71e-06 1.000 -0.000 0.000
x4 1.0000 6.25e-05 1.6e+04 0.000 1.000 1.000
x5 -2.107e-10 5.96e-05 -3.54e-06 1.000 -0.000 0.000
x6 -7.997e-10 0.000 -7.41e-06 1.000 -0.000 0.000
x7 -2.295e-10 6.22e-05 -3.69e-06 1.000 -0.000 0.000
x8 -2.246e-10 6.15e-05 -3.65e-06 1.000 -0.000 0.000
x9 -1.167e-11 1.25e-05 -9.33e-07 1.000 -2.45e-05 2.45e-05
x10 -4.454e-11 2.66e-05 -1.68e-06 1.000 -5.21e-05 5.21e-05
x11 -2.221e-10 6.11e-05 -3.63e-06 1.000 -0.000 0.000
x12 -2.266e-10 6.18e-05 -3.66e-06 1.000 -0.000 0.000
x13 -2.315e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x14 -1.767e-09 0.000 -1.02e-05 1.000 -0.000 0.000
x15 -2.11e-10 5.93e-05 -3.56e-06 1.000 -0.000 0.000
x16 -5.283e-10 9.45e-05 -5.59e-06 1.000 -0.000 0.000
x17 -2.098e-10 6.01e-05 -3.49e-06 1.000 -0.000 0.000
x18 -3.82e-11 2.41e-05 -1.58e-06 1.000 -4.73e-05 4.73e-05
x19 -2.645e-10 6.61e-05 -4e-06 1.000 -0.000 0.000
x20 -2.417e-10 6.21e-05 -3.89e-06 1.000 -0.000 0.000
x21 -4.824e-10 8.83e-05 -5.46e-06 1.000 -0.000 0.000
x22 -3.758e-13 1.19e-11 -0.032 0.975 -2.36e-11 2.29e-11
x23 -1.089e-11 8.42e-11 -0.129 0.897 -1.76e-10 1.54e-10
x24 -2.538e-09 0.000 -1.44e-05 1.000 -0.000 0.000
x25 -2.038e-09 0.000 -1.49e-05 1.000 -0.000 0.000
x26 -3.16e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x27 -2.955e-09 0.000 -1.32e-05 1.000 -0.000 0.000
x28 -1.664e-09 0.000 -9.94e-06 1.000 -0.000 0.000
x29 -1.568e-09 0.000 -9.63e-06 1.000 -0.000 0.000
ar.L1 -0.4923 6.2e-10 -7.94e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 3.6e-10 -5.35e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.71e-10 -2.71e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.41e-09 -5.04e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 51.79 Jarque-Bera (JB): 4012066.18
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.44
Prob(H) (two-sided): 0.00 Kurtosis: 348.68
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.01e+30. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500 Epoch 00001: val_loss improved from inf to 0.00799, saving model to LSTM6.h5 16/16 - 3s - loss: 0.0776 - accuracy: 0.0000e+00 - val_loss: 0.0080 - val_accuracy: 0.0037 - lr: 0.0010 - 3s/epoch - 218ms/step Epoch 2/500 Epoch 00002: val_loss did not improve from 0.00799 16/16 - 0s - loss: 0.0318 - accuracy: 0.0000e+00 - val_loss: 0.1262 - val_accuracy: 0.0000e+00 - lr: 0.0010 - 100ms/epoch - 6ms/step Epoch 3/500 Epoch 00003: val_loss did not improve from 0.00799 16/16 - 0s - loss: 0.0517 - accuracy: 0.0000e+00 - val_loss: 0.0132 - val_accuracy: 0.0037 - lr: 0.0010 - 113ms/epoch - 7ms/step Epoch 4/500 Epoch 00004: val_loss improved from 0.00799 to 0.00501, saving model to LSTM6.h5 16/16 - 0s - loss: 0.0237 - accuracy: 0.0000e+00 - val_loss: 0.0050 - val_accuracy: 0.0037 - lr: 0.0010 - 135ms/epoch - 8ms/step Epoch 5/500 Epoch 00005: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0243 - accuracy: 0.0000e+00 - val_loss: 0.0101 - val_accuracy: 0.0037 - lr: 0.0010 - 105ms/epoch - 7ms/step Epoch 6/500 Epoch 00006: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0106 - accuracy: 0.0000e+00 - val_loss: 0.0249 - val_accuracy: 0.0037 - lr: 0.0010 - 104ms/epoch - 7ms/step Epoch 7/500 Epoch 00007: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0154 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 0.0010 - 107ms/epoch - 7ms/step Epoch 8/500 Epoch 00008: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0048 - accuracy: 0.0000e+00 - val_loss: 0.0084 - val_accuracy: 0.0037 - lr: 0.0010 - 119ms/epoch - 7ms/step Epoch 9/500 Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513. 
Epoch 00009: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0070 - accuracy: 0.0000e+00 - val_loss: 0.0081 - val_accuracy: 0.0037 - lr: 0.0010 - 110ms/epoch - 7ms/step Epoch 10/500 Epoch 00010: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0017 - accuracy: 0.0000e+00 - val_loss: 0.0079 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 99ms/epoch - 6ms/step Epoch 11/500 Epoch 00011: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0073 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 100ms/epoch - 6ms/step Epoch 12/500 Epoch 00012: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0013 - accuracy: 0.0000e+00 - val_loss: 0.0067 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 120ms/epoch - 7ms/step Epoch 13/500 Epoch 00013: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0012 - accuracy: 0.0000e+00 - val_loss: 0.0063 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 114ms/epoch - 7ms/step Epoch 14/500 Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05. 
Epoch 00014: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-04 - 121ms/epoch - 8ms/step Epoch 15/500 Epoch 00015: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step Epoch 16/500 Epoch 00016: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step Epoch 17/500 Epoch 00017: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0061 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step Epoch 18/500 Epoch 00018: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step Epoch 19/500 Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05. 
Epoch 00019: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step Epoch 20/500 Epoch 00020: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step Epoch 21/500 Epoch 00021: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step Epoch 22/500 Epoch 00022: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step Epoch 23/500 Epoch 00023: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step Epoch 24/500 Epoch 00024: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step Epoch 25/500 Epoch 00025: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0060 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step Epoch 26/500 Epoch 00026: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step Epoch 27/500 Epoch 00027: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step Epoch 28/500 Epoch 00028: val_loss did not improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step Epoch 29/500 Epoch 00029: val_loss did not 
improve from 0.00501 16/16 - 0s - loss: 0.0011 - accuracy: 0.0000e+00 - val_loss: 0.0059 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
[epochs 00030–00054: val_loss did not improve from 0.00501; loss plateaued near 0.0010, val_loss near 0.0058, lr 1.0000e-05, ~6–8ms/step]
Epoch 00054: early stopping
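The trace above is driven by three Keras callbacks, judging by its messages: ModelCheckpoint (save on improvement), ReduceLROnPlateau, and EarlyStopping. Their combined bookkeeping can be sketched in plain Python; the patience values below are assumptions, not read from the notebook (an early stop at epoch 54 is consistent with the last improvement at epoch 4 and a stopping patience of 50):

```python
def run_with_callbacks(val_losses, lr=1e-3, factor=0.1, min_lr=1e-5,
                       lr_patience=5, stop_patience=50):
    """Mimic ModelCheckpoint / ReduceLROnPlateau / EarlyStopping bookkeeping.

    Patience values are assumed, not taken from the notebook.
    """
    best, best_epoch, since_improve, log = float("inf"), -1, 0, []
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:                       # ModelCheckpoint: save on improvement
            best, best_epoch, since_improve = vl, epoch, 0
            log.append(f"Epoch {epoch:05d}: val_loss improved to {vl:.5f}")
        else:
            since_improve += 1
            log.append(f"Epoch {epoch:05d}: val_loss did not improve from {best:.5f}")
        if since_improve and since_improve % lr_patience == 0:
            lr = max(lr * factor, min_lr)   # ReduceLROnPlateau: cut lr on a plateau
            log.append(f"Epoch {epoch:05d}: reducing learning rate to {lr:g}")
        if since_improve >= stop_patience:  # EarlyStopping: give up after a long plateau
            log.append(f"Epoch {epoch:05d}: early stopping")
            break
    return best, best_epoch, log
```

With these assumed values the sketch reproduces the log's shape: a few improving epochs, periodic learning-rate cuts down to the 1e-05 floor, then an early stop 50 non-improving epochs after the best checkpoint.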
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 75.03401716737034
RMSE: 8.662217797271685
MAPE: 7.077228582293258
EMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 70.28436187942754
RMSE: 8.383576914386099
MAPE: 6.876111393338704
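The MSE/RMSE/MAPE figures and the two directional hit-rates printed above are straightforward to recompute. The notebook's own implementation is not shown, so the directional definitions here are assumptions: "Prediction vs Close" compares the predicted move against the realised close-to-close move, and "Prediction vs Prediction" compares consecutive predictions against the realised move.

```python
import numpy as np

def evaluate(pred, close):
    """Point-error metrics plus direction hit-rates for a 1-step-ahead forecast."""
    pred, close = np.asarray(pred, float), np.asarray(close, float)
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = mse ** 0.5
    mape = float(np.mean(np.abs(err / close))) * 100          # reported in percent
    up_real = np.sign(np.diff(close))                          # realised direction
    acc_close = float(np.mean(np.sign(pred[1:] - close[:-1]) == up_real))
    acc_pred = float(np.mean(np.sign(np.diff(pred)) == up_real))
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape,
            "vs_close": acc_close * 100, "vs_pred": acc_pred * 100}
```

Under these definitions a hit-rate near 50% (as seen for "Prediction vs Prediction" above) is roughly what a direction-agnostic forecast would score.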
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
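TA-Lib's WMA weights the last `timeperiod` prices linearly, with the most recent bar weighted highest. A numpy equivalent is useful as a cross-check or when TA-Lib is unavailable; like TA-Lib, it emits NaN for the first `timeperiod - 1` positions:

```python
import numpy as np

def wma(price, timeperiod=30):
    """Linearly weighted moving average: weights 1..n, newest bar weighted n."""
    price = np.asarray(price, float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full_like(price, np.nan)          # warm-up period stays NaN
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, w) / w.sum()
    return out
```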
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.57 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.37 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14597.576, Time=5.63 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=8.21 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15338.693, Time=11.46 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15153.472, Time=27.75 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17112.658, Time=16.25 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.587, Time=10.68 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15106.216, Time=14.92 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-12251.715, Time=36.41 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 140.271 seconds
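The stepwise trace above follows pmdarima's Hyndman–Khandakar-style search: start from a seed order, score neighbouring orders by AIC, and move to the best-scoring neighbour until none improves. A toy sketch of that greedy loop over (p, q) only; the real auto_arima also varies d, seasonal terms, and the intercept, and uses a richer neighbourhood:

```python
def stepwise_search(score, start=(1, 1), max_order=3):
    """Greedy AIC descent over (p, q); `score(p, q)` returns the model's AIC."""
    best = start
    best_aic = score(*best)
    while True:
        neighbours = [(best[0] + dp, best[1] + dq)
                      for dp, dq in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                      if 0 <= best[0] + dp <= max_order
                      and 0 <= best[1] + dq <= max_order]
        aic, order = min((score(p, q), (p, q)) for p, q in neighbours)
        if aic >= best_aic:
            return best, best_aic          # no neighbour improves: stop
        best, best_aic = order, aic
```

Because the descent is greedy, it can settle on a local AIC minimum, which is why the traces above sometimes revisit orders with an added intercept before declaring a best model.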
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8588.329
Date: Sun, 12 Dec 2021 AIC -17112.658
Time: 17:28:56 BIC -16962.551
Sample: 0 HQIC -17055.011
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.53e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x2 -4.512e-09 3.25e-06 -0.001 0.999 -6.38e-06 6.37e-06
x3 -4.538e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x4 1.0000 3.26e-06 3.07e+05 0.000 1.000 1.000
x5 -4.105e-09 3.11e-06 -0.001 0.999 -6.1e-06 6.09e-06
x6 -1.488e-08 5.45e-06 -0.003 0.998 -1.07e-05 1.07e-05
x7 -4.481e-09 3.24e-06 -0.001 0.999 -6.36e-06 6.36e-06
x8 -4.365e-09 3.2e-06 -0.001 0.999 -6.29e-06 6.28e-06
x9 -4.628e-10 8.38e-07 -0.001 1.000 -1.64e-06 1.64e-06
x10 -7.326e-10 1.3e-06 -0.001 1.000 -2.55e-06 2.54e-06
x11 -4.347e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x12 -4.345e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x13 -4.52e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x14 -3.586e-08 9e-06 -0.004 0.997 -1.77e-05 1.76e-05
x15 -3.757e-09 2.98e-06 -0.001 0.999 -5.84e-06 5.83e-06
x16 -1.24e-08 5.36e-06 -0.002 0.998 -1.05e-05 1.05e-05
x17 -4.515e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x18 -2.632e-10 7.07e-07 -0.000 1.000 -1.39e-06 1.39e-06
x19 -4.642e-09 3.3e-06 -0.001 0.999 -6.47e-06 6.46e-06
x20 -3.919e-10 6.91e-07 -0.001 1.000 -1.36e-06 1.35e-06
x21 -7.69e-09 4.13e-06 -0.002 0.999 -8.11e-06 8.09e-06
x22 -6.998e-12 2.69e-13 -25.970 0.000 -7.53e-12 -6.47e-12
x23 -1.81e-10 2.22e-12 -81.582 0.000 -1.85e-10 -1.77e-10
x24 -4.955e-08 8.9e-06 -0.006 0.996 -1.75e-05 1.74e-05
x25 -4.901e-08 8.4e-06 -0.006 0.995 -1.65e-05 1.64e-05
x26 -6.446e-08 1.2e-05 -0.005 0.996 -2.37e-05 2.35e-05
x27 -5.73e-08 1.14e-05 -0.005 0.996 -2.24e-05 2.23e-05
x28 -2.997e-08 8.22e-06 -0.004 0.997 -1.61e-05 1.61e-05
x29 -3.486e-08 8.89e-06 -0.004 0.997 -1.75e-05 1.74e-05
ma.L1 -1.3902 3.62e-10 -3.84e+09 0.000 -1.390 -1.390
ma.L2 0.4033 3.72e-10 1.08e+09 0.000 0.403 0.403
sigma2 8.541e-11 6.95e-11 1.229 0.219 -5.08e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 66.92 Jarque-Bera (JB): 6039240.46
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.14
Prob(H) (two-sided): 0.00 Kurtosis: 426.63
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.94e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
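The information criteria in the summary above follow directly from the reported log-likelihood: AIC = 2k − 2 ln L̂ and BIC = k ln n − 2 ln L̂. With k = 32 estimated parameters (x1–x29, ma.L1, ma.L2, sigma2), the reported BIC matches if n is taken as the effective sample size 805 (808 observations less the three differences) — an inference from the numbers, not stated in the output:

```python
from math import log

def aic(k, loglik):
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * loglik

def bic(k, loglik, n):
    """Bayesian information criterion: k ln(n) - 2 ln(L)."""
    return k * log(n) - 2 * loglik

# SARIMAX(0,3,2) above: log-likelihood 8588.329, 32 parameters, n_eff = 805
print(round(aic(32, 8588.329), 3))       # ≈ -17112.658, as in the summary
print(round(bic(32, 8588.329, 805), 3))  # ≈ -16962.551, as in the summary
```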
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 00001: val_loss improved from inf to 0.12839, saving model to LSTM6.h5 (loss: 0.1323)
Epoch 00002: val_loss improved from 0.12839 to 0.10391, saving model to LSTM6.h5 (loss: 0.0454)
Epoch 00003: val_loss improved from 0.10391 to 0.00557, saving model to LSTM6.h5 (loss: 0.0245)
[epochs 00004–00052: val_loss did not improve from 0.00557; ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 8 and to 1.0000e-05 at epoch 13; loss fell from 0.0150 to 0.0014 while val_loss settled near 0.0099]
Epoch 00053: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 75.03401716737034
RMSE: 8.662217797271685
MAPE: 7.077228582293258
EMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 70.28436187942754
RMSE: 8.383576914386099
MAPE: 6.876111393338704
WMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 70.57086226636761
RMSE: 8.400646538592587
MAPE: 6.6664001460728475
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.776, Time=3.85 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.586, Time=5.60 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16271.755, Time=7.45 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.586, Time=8.32 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15152.908, Time=11.62 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14481.105, Time=13.78 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16088.109, Time=23.23 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.021, Time=6.97 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.615, Time=3.80 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=8.00 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=20.06 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.981, Time=4.57 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.666, Time=5.26 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 122.542 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 17:35:26 BIC -16911.965
Sample: 0 HQIC -17010.203
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 6.02e-05 -4.65e-06 1.000 -0.000 0.000
x2 -2.817e-10 6.04e-05 -4.66e-06 1.000 -0.000 0.000
x3 -2.805e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x4 1.0000 6.03e-05 1.66e+04 0.000 1.000 1.000
x5 -2.6e-10 5.8e-05 -4.48e-06 1.000 -0.000 0.000
x6 -1.389e-09 0.000 -1.08e-05 1.000 -0.000 0.000
x7 -2.789e-10 6.01e-05 -4.64e-06 1.000 -0.000 0.000
x8 -2.763e-10 5.99e-05 -4.62e-06 1.000 -0.000 0.000
x9 -2.224e-12 1.6e-06 -1.39e-06 1.000 -3.13e-06 3.13e-06
x10 -1.345e-10 4.12e-05 -3.26e-06 1.000 -8.08e-05 8.08e-05
x11 -2.9e-10 6.12e-05 -4.74e-06 1.000 -0.000 0.000
x12 -2.602e-10 5.82e-05 -4.47e-06 1.000 -0.000 0.000
x13 -2.807e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x14 -1.87e-09 0.000 -1.2e-05 1.000 -0.000 0.000
x15 -2.844e-10 6.05e-05 -4.7e-06 1.000 -0.000 0.000
x16 -7.962e-11 3.2e-05 -2.48e-06 1.000 -6.28e-05 6.28e-05
x17 -2.445e-10 5.61e-05 -4.36e-06 1.000 -0.000 0.000
x18 -6.4e-10 9.15e-05 -6.99e-06 1.000 -0.000 0.000
x19 -2.923e-10 6.14e-05 -4.76e-06 1.000 -0.000 0.000
x20 -4.336e-10 7.41e-05 -5.86e-06 1.000 -0.000 0.000
x21 -4.55e-10 7.5e-05 -6.07e-06 1.000 -0.000 0.000
x22 -3.587e-13 1.42e-11 -0.025 0.980 -2.82e-11 2.75e-11
x23 -1.088e-11 9.56e-11 -0.114 0.909 -1.98e-10 1.76e-10
x24 -2.146e-09 0.000 -1.63e-05 1.000 -0.000 0.000
x25 -1.637e-09 0.000 -1.35e-05 1.000 -0.000 0.000
x26 -3.147e-09 0.000 -1.56e-05 1.000 -0.000 0.000
x27 -2.58e-09 0.000 -1.41e-05 1.000 -0.000 0.000
x28 -2.444e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x29 -1.666e-09 0.000 -1.13e-05 1.000 -0.000 0.000
ar.L1 -0.4923 5.1e-10 -9.65e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 2.96e-10 -6.49e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.4e-10 -3.29e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.16e-09 -6.12e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.06 Jarque-Bera (JB): 4126495.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.48
Prob(H) (two-sided): 0.00 Kurtosis: 353.58
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.01e+30. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 00001: val_loss improved from inf to 0.02054, saving model to LSTM6.h5 (loss: 0.2598)
Epoch 00002: val_loss did not improve from 0.02054 (loss: 0.1598)
Epoch 00003: val_loss improved from 0.02054 to 0.01761, saving model to LSTM6.h5 (loss: 0.1513)
Epoch 00004: val_loss improved from 0.01761 to 0.00421, saving model to LSTM6.h5 (loss: 0.0546)
[epochs 00005–00054: val_loss did not improve from 0.00421; ReduceLROnPlateau cut lr to 1.0000e-04 at epoch 9 and to 1.0000e-05 at epoch 14; loss plateaued at 0.0015 with val_loss near 0.0101]
Epoch 00054: early stopping
Indicator   Pred vs Close   Pred vs Prediction   MSE        RMSE     MAPE
SMA         53.36%          48.51%                75.034     8.662    7.077
EMA         55.22%          45.15%                70.284     8.384    6.876
WMA         54.85%          45.15%                70.571     8.401    6.666
DEMA        51.87%          44.40%               329.604    18.155   16.799
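The MSE / RMSE / MAPE figures follow their usual definitions, and "Prediction vs Close" can be read as directional accuracy: the share of steps where the predicted move and the realized move have the same sign. A minimal sketch (function names are mine, not from the notebook):

```python
import math

def mse(actual, predicted):
    # Mean squared error
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Root mean squared error
    return math.sqrt(mse(actual, predicted))

def mape(actual, predicted):
    # Mean absolute percentage error, in percent
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def directional_accuracy(actual, predicted):
    # Percent of steps where predicted and realized moves share a sign;
    # one plausible reading of the "Prediction vs Close" accuracy above
    hits = sum(((a1 - a0) > 0) == ((p1 - p0) > 0)
               for a0, a1, p0, p1 in zip(actual, actual[1:],
                                         predicted, predicted[1:]))
    return 100.0 * hits / (len(actual) - 1)
```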
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
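TA-Lib computes KAMA internally; for reference, the standard Kaufman recursion it implements can be sketched in pure Python. Seeding and warm-up handling here are simplified, so values may not match TA-Lib tick for tick:

```python
def kama(prices, timeperiod=30, fast=2, slow=30):
    # Kaufman Adaptive Moving Average: an EMA whose smoothing constant
    # adapts to the efficiency ratio (net change / total path length)
    n = timeperiod
    out = [None] * len(prices)
    if len(prices) <= n:
        return out
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    out[n] = prices[n]  # simple seed; TA-Lib's warm-up differs slightly
    for t in range(n + 1, len(prices)):
        change = abs(prices[t] - prices[t - n])
        volatility = sum(abs(prices[i] - prices[i - 1])
                         for i in range(t - n + 1, t + 1))
        er = change / volatility if volatility else 0.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        out[t] = out[t - 1] + sc * (prices[t] - out[t - 1])
    return out
```

A high efficiency ratio (trending input) pushes the smoothing constant toward the fast EMA; choppy input pushes it toward the slow EMA, which is why KAMA flattens out in sideways markets.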
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.104, Time=4.07 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.591, Time=5.58 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16779.655, Time=11.30 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.590, Time=8.28 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16989.430, Time=4.15 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16990.286, Time=4.14 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.543, Time=3.85 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-16987.154, Time=4.23 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-16533.935, Time=16.23 sec
Best model: ARIMA(2,3,0)(0,0,0)[0]
Total fit time: 61.869 seconds
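pmdarima's stepwise search ranks candidates by AIC = 2k - 2 ln L-hat. As a sanity check, the winning SARIMAX(2,3,0) fit's reported log likelihood reproduces its reported AIC; the parameter count of 32 is inferred from the coefficient table (29 exogenous terms, 2 AR terms, sigma2):

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: 2k - 2*ln(L_hat); lower is better
    return 2 * n_params - 2 * log_likelihood

# SARIMAX(2,3,0) reports Log Likelihood 8527.143 and AIC -16990.286
print(round(aic(8527.143, 32), 3))  # prints -16990.286
```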
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(2, 3, 0) Log Likelihood 8527.143
Date: Sun, 12 Dec 2021 AIC -16990.286
Time: 17:45:39 BIC -16840.179
Sample: 0 HQIC -16932.639
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x4             1.0000   4.36e-27  2.29e+26      0.000       1.000       1.000
(x1-x3 and x5-x29 are all numerically zero, |coef| <= 2.4e-14, with nan or
degenerate standard errors; rows omitted)
ar.L1 -0.9879 1.21e-22 -8.16e+21 0.000 -0.988 -0.988
ar.L2 -0.4879 1.29e-22 -3.79e+21 0.000 -0.488 -0.488
sigma2 1e-10 6.99e-11 1.432 0.152 -3.69e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 57.29 Jarque-Bera (JB): 559955.86
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.13 Skew: 0.64
Prob(H) (two-sided): 0.00 Kurtosis: 132.20
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number inf. Standard errors may be unstable.
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/mlemodel.py:2968: RuntimeWarning: divide by zero encountered in true_divide return self.params / self.bse
ARIMA order: (2, 3, 0)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
Epoch 1/500: val_loss improved from inf to 0.03815, saving model to LSTM6.h5 (45/45 batches, loss 0.1761, lr 1.0e-03). val_loss improved to 0.00774 (epoch 2), 0.00573 (epoch 9), and 0.00486 (epoch 10); ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 7 and 1.0e-05 at epoch 15. From epoch 11 onward val_loss never improved from 0.00486, drifting between 0.0073 and 0.0076 while training loss fell from 0.0020 to 0.0011. Epoch 00060: early stopping
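The lr column in these logs is driven by Keras's ReduceLROnPlateau callback, which cuts the learning rate after a run of epochs without val_loss improvement. Its core logic can be mimicked in a few lines (the patience and factor here are illustrative guesses, not read from the notebook's code):

```python
def reduce_lr_on_plateau(val_losses, lr=1e-3, factor=0.1,
                         patience=5, min_lr=1e-5):
    # Mimic Keras ReduceLROnPlateau: multiply lr by `factor` once
    # `patience` epochs pass without a new best val_loss.
    # Returns the lr in effect at each epoch.
    best = float("inf")
    wait = 0
    lrs = []
    for v in val_losses:
        lrs.append(lr)
        if v < best:
            best, wait = v, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lrs
```

This reproduces the 1e-03 to 1e-04 to 1e-05 staircase visible in the logs once val_loss stops improving.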
Indicator   Pred vs Close   Pred vs Prediction   MSE        RMSE     MAPE
SMA         53.36%          48.51%                75.034     8.662    7.077
EMA         55.22%          45.15%                70.284     8.384    6.876
WMA         54.85%          45.15%                70.571     8.401    6.666
DEMA        51.87%          44.40%               329.604    18.155   16.799
KAMA        52.24%          45.15%               103.274    10.162    8.511
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
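MIDPOINT is the simplest of these overlays: the mean of the highest and lowest price in the window. A pure-Python equivalent (warm-up handling simplified to None padding):

```python
def midpoint(prices, timeperiod=14):
    # MidPoint over period: (highest + lowest) / 2 in a rolling window
    out = [None] * len(prices)
    for t in range(timeperiod - 1, len(prices)):
        window = prices[t - timeperiod + 1 : t + 1]
        out[t] = (max(window) + min(window)) / 2.0
    return out
```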
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.238, Time=3.61 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.578, Time=5.53 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16746.296, Time=8.58 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.578, Time=8.23 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16987.591, Time=3.89 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16395.520, Time=13.75 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17063.555, Time=13.49 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.578, Time=10.90 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16082.554, Time=21.32 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-15249.608, Time=19.15 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 108.468 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8563.778
Date: Sun, 12 Dec 2021 AIC -17063.555
Time: 17:49:18 BIC -16913.448
Sample: 0 HQIC -17005.908
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x4             1.0000      0.000  8075.329      0.000       1.000       1.000
(x1-x3 and x5-x29 are all numerically zero, |coef| <= 5.6e-09, with p-values
of ~1.000; rows omitted)
ma.L1 -1.3318 7.32e-07 -1.82e+06 0.000 -1.332 -1.332
ma.L2 0.3767 7.56e-07 4.98e+05 0.000 0.377 0.377
sigma2 9.093e-11 6.97e-11 1.304 0.192 -4.57e-11 2.28e-10
===================================================================================
Ljung-Box (L1) (Q): 76.00 Jarque-Bera (JB): 304933.46
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.03 Skew: 1.65
Prob(H) (two-sided): 0.00 Kurtosis: 98.29
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.19e+28. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500: val_loss improved from inf to 0.03989, saving model to LSTM6.h5 (58/58 batches, loss 0.2600, lr 1.0e-03). val_loss improved to 0.00457 at epoch 2 and never improved again; ReduceLROnPlateau cut the learning rate to 1.0e-04 at epoch 7 and 1.0e-05 at epoch 12. val_loss then declined slowly from 0.0150 toward 0.0052 while training loss fell from 0.0229 to 0.0010, without beating the epoch-2 best. Epoch 00052: early stopping
Indicator   Prediction vs Close   Prediction vs Prediction        MSE      RMSE     MAPE
SMA                      53.36%                     48.51%    75.0340    8.6622   7.0772
EMA                      55.22%                     45.15%    70.2844    8.3836   6.8761
WMA                      54.85%                     45.15%    70.5709    8.4006   6.6664
DEMA                     51.87%                     44.40%   329.6036   18.1550  16.7992
KAMA                     52.24%                     45.15%   103.2744   10.1624   8.5106
MIDPOINT                 51.87%                     45.15%    97.3184    9.8650   8.2519
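The per-indicator metrics printed above can be reproduced with a few lines of NumPy. A sketch, under the assumption that `y_true` and `y_pred` are aligned arrays of actual closes and model predictions, and that the "Prediction vs Close" accuracy compares the sign of the day-over-day change:

```python
import numpy as np

def report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / y_true)) * 100)  # percent error
    # Directional accuracy: did the prediction move in the same direction
    # as the actual close from one day to the next?
    same_dir = np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))
    return {"acc": float(same_dir.mean() * 100), "mse": mse, "rmse": rmse, "mape": mape}

# Tiny hypothetical example
y_true = np.array([100.0, 101.0, 99.0, 102.0, 103.0])
y_pred = np.array([100.5, 100.0, 99.5, 101.0, 104.0])
print(report(y_true, y_pred))
```

The exact definition of "Prediction vs Prediction" accuracy in the notebook is not shown here; the sketch only covers the prediction-vs-close comparison.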
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
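The T3 described by the help text above can also be reproduced without TA-Lib from its definition: a generalized DEMA, GD(x) = (1 + v)·EMA(x) − v·EMA(EMA(x)), applied three times. A pandas sketch (values near the start of the series will differ from TA-Lib's, which handles the warm-up with a fixed lookback):

```python
import numpy as np
import pandas as pd

def t3(series: pd.Series, timeperiod: int = 5, vfactor: float = 0.7) -> pd.Series:
    """Tillson T3: a generalized DEMA (GD) applied three times."""
    ema = lambda s: s.ewm(span=timeperiod, adjust=False).mean()
    gd = lambda s: (1 + vfactor) * ema(s) - vfactor * ema(ema(s))
    return gd(gd(gd(series)))

# Hypothetical closing prices: a linear ramp from 100 to 110.
close = pd.Series(np.linspace(100.0, 110.0, 50))
smoothed = t3(close)
print(round(float(smoothed.iloc[-1]), 2))
```

On a steadily trending series the triple-GD construction tracks the input with much less lag than a plain EMA of the same period, which is T3's selling point as a smoother.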
Working on T3 predictions
parameters used: 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16837.838, Time=3.68 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14497.319, Time=3.98 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16084.348, Time=6.86 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.920, Time=12.01 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15304.480, Time=11.66 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15949.053, Time=12.77 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17059.707, Time=12.46 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15313.920, Time=14.52 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16054.952, Time=13.57 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11445.350, Time=35.04 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 126.548 seconds
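The stepwise search above ranks candidate orders by AIC = 2k − 2 ln L̂, where k is the number of fitted parameters and L̂ the maximized likelihood. As a self-contained illustration of the quantity being minimized (independent of pmdarima), the AIC of an i.i.d. zero-mean Gaussian fit to a residual series can be computed directly:

```python
import numpy as np

def gaussian_aic(residuals: np.ndarray, n_params: int) -> float:
    """AIC = 2k - 2*ln(L) for an i.i.d. zero-mean Gaussian likelihood
    with the variance at its maximum-likelihood estimate."""
    n = residuals.size
    sigma2 = np.mean(residuals ** 2)               # MLE of the variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * n_params - 2 * loglik

rng = np.random.default_rng(0)
resid = rng.normal(0, 1, 808)                      # same sample size as the fit above
print(round(gaussian_aic(resid, n_params=3), 1))   # lower is better
```

Adding parameters always raises the 2k penalty, so a more complex order only wins the search if it buys enough extra likelihood, which is exactly the trade-off visible in the AIC column above.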
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8561.853
Date: Sun, 12 Dec 2021 AIC -17059.707
Time: 17:55:46 BIC -16909.600
Sample: 0 HQIC -17002.059
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.003e-07 7.69e-05 -0.001 0.999 -0.000 0.000
x2 -1.001e-07 7.44e-05 -0.001 0.999 -0.000 0.000
x3 -1.006e-07 7.84e-05 -0.001 0.999 -0.000 0.000
x4 1.0000 7.11e-05 1.41e+04 0.000 1.000 1.000
x5 -9.611e-08 6.77e-05 -0.001 0.999 -0.000 0.000
x6 -1.249e-07 4.06e-05 -0.003 0.998 -7.96e-05 7.94e-05
x7 -1e-07 7.89e-05 -0.001 0.999 -0.000 0.000
x8 -0.0002 9.43e-05 -1.838 0.066 -0.000 1.15e-05
x9 2.853e-08 9.89e-05 0.000 1.000 -0.000 0.000
x10 -4.022e-05 0.000 -0.200 0.842 -0.000 0.000
x11 0.0003 7e-05 4.122 0.000 0.000 0.000
x12 7.55e-05 0.000 0.633 0.527 -0.000 0.000
x13 -1.005e-07 7.29e-05 -0.001 0.999 -0.000 0.000
x14 -2.756e-07 0.000 -0.001 0.999 -0.000 0.000
x15 -8.419e-08 8.98e-05 -0.001 0.999 -0.000 0.000
x16 -2.171e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.105e-07 9.93e-05 -0.001 0.999 -0.000 0.000
x18 1.263e-07 3.22e-05 0.004 0.997 -6.31e-05 6.33e-05
x19 -8.769e-08 0.000 -0.001 0.999 -0.000 0.000
x20 -5.772e-08 0.000 -0.000 1.000 -0.000 0.000
x21 -9.77e-08 0.000 -0.001 1.000 -0.000 0.000
x22 -3.686e-12 7.09e-07 -5.2e-06 1.000 -1.39e-06 1.39e-06
x23 -9.216e-12 2.4e-05 -3.83e-07 1.000 -4.71e-05 4.71e-05
x24 -3.648e-07 0.000 -0.001 0.999 -0.001 0.001
x25 -1.391e-07 0.001 -0.000 1.000 -0.002 0.002
x26 -3.142e-07 0.000 -0.001 0.999 -0.001 0.001
x27 -3.042e-07 5.47e-05 -0.006 0.996 -0.000 0.000
x28 -1.785e-07 0.000 -0.001 0.999 -0.000 0.000
x29 -1.909e-07 0.000 -0.001 1.000 -0.001 0.001
ma.L1 -1.3901 8.24e-06 -1.69e+05 0.000 -1.390 -1.390
ma.L2 0.4035 2.01e-05 2.01e+04 0.000 0.403 0.404
sigma2 7.538e-11 6.94e-11 1.085 0.278 -6.07e-11 2.11e-10
===================================================================================
Ljung-Box (L1) (Q): 69.36 Jarque-Bera (JB): 6470073.86
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -12.55
Prob(H) (two-sided): 0.00 Kurtosis: 441.48
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.58e+22. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
Epoch 1/500 Epoch 00001: val_loss improved from inf to 0.02365, saving model to LSTM6.h5 (loss: 0.1736)
(Epochs 2-132 condensed: ReduceLROnPlateau cut the learning rate from 1e-03 to 1e-04 at epoch 7 and to the 1e-05 floor at epoch 22; val_loss reached 0.00372 by epoch 18, plateaued through epoch 41, then improved by roughly 1e-05 per epoch, with the LSTM6.h5 checkpoint re-saved each epoch as val_loss fell to 0.00311 by epoch 132.)
Epoch 133/500
Epoch 00133: val_loss improved from 0.00311 to 0.00311, saving model to LSTM6.h5 43/43 - 0s - loss: 7.9676e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 284ms/epoch - 7ms/step Epoch 134/500 Epoch 00134: val_loss improved from 0.00311 to 0.00311, saving model to LSTM6.h5 43/43 - 0s - loss: 7.9444e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 263ms/epoch - 6ms/step Epoch 135/500 Epoch 00135: val_loss improved from 0.00311 to 0.00311, saving model to LSTM6.h5 43/43 - 0s - loss: 7.9214e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step Epoch 136/500 Epoch 00136: val_loss improved from 0.00311 to 0.00311, saving model to LSTM6.h5 43/43 - 0s - loss: 7.8986e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step Epoch 137/500 Epoch 00137: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.8760e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step Epoch 138/500 Epoch 00138: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.8535e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step Epoch 139/500 Epoch 00139: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.8313e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step Epoch 140/500 Epoch 00140: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.8092e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step Epoch 141/500 Epoch 00141: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.7874e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 291ms/epoch - 7ms/step Epoch 142/500 Epoch 00142: val_loss did not improve from 0.00311 43/43 - 0s 
- loss: 7.7657e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step Epoch 143/500 Epoch 00143: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.7443e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 263ms/epoch - 6ms/step Epoch 144/500 Epoch 00144: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.7230e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 302ms/epoch - 7ms/step Epoch 145/500 Epoch 00145: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.7020e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 243ms/epoch - 6ms/step Epoch 146/500 Epoch 00146: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.6811e-04 - accuracy: 0.0000e+00 - val_loss: 0.0031 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step Epoch 147/500 Epoch 00147: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.6605e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step Epoch 148/500 Epoch 00148: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.6400e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step Epoch 149/500 Epoch 00149: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.6198e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 255ms/epoch - 6ms/step Epoch 150/500 Epoch 00150: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.5997e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step Epoch 151/500 Epoch 00151: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.5799e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step Epoch 152/500 Epoch 00152: val_loss did not improve 
from 0.00311 43/43 - 0s - loss: 7.5602e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 276ms/epoch - 6ms/step Epoch 153/500 Epoch 00153: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.5408e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step Epoch 154/500 Epoch 00154: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.5215e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step Epoch 155/500 Epoch 00155: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.5024e-04 - accuracy: 0.0000e+00 - val_loss: 0.0032 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step Epoch 156/500 Epoch 00156: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.4836e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 257ms/epoch - 6ms/step Epoch 157/500 Epoch 00157: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.4649e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step Epoch 158/500 Epoch 00158: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.4464e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step Epoch 159/500 Epoch 00159: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.4280e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 245ms/epoch - 6ms/step Epoch 160/500 Epoch 00160: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.4099e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 282ms/epoch - 7ms/step Epoch 161/500 Epoch 00161: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.3919e-04 - accuracy: 0.0000e+00 - val_loss: 0.0033 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step Epoch 162/500 Epoch 00162: 
val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.3741e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step Epoch 163/500 Epoch 00163: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.3565e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 283ms/epoch - 7ms/step Epoch 164/500 Epoch 00164: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.3391e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 258ms/epoch - 6ms/step Epoch 165/500 Epoch 00165: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.3218e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step Epoch 166/500 Epoch 00166: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.3047e-04 - accuracy: 0.0000e+00 - val_loss: 0.0034 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 246ms/epoch - 6ms/step Epoch 167/500 Epoch 00167: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.2878e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step Epoch 168/500 Epoch 00168: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.2710e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step Epoch 169/500 Epoch 00169: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.2543e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step Epoch 170/500 Epoch 00170: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.2379e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 242ms/epoch - 6ms/step Epoch 171/500 Epoch 00171: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.2215e-04 - accuracy: 0.0000e+00 - val_loss: 0.0035 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step Epoch 
172/500 Epoch 00172: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.2054e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step Epoch 173/500 Epoch 00173: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.1893e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step Epoch 174/500 Epoch 00174: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.1734e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step Epoch 175/500 Epoch 00175: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.1577e-04 - accuracy: 0.0000e+00 - val_loss: 0.0036 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step Epoch 176/500 Epoch 00176: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.1421e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step Epoch 177/500 Epoch 00177: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.1266e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 259ms/epoch - 6ms/step Epoch 178/500 Epoch 00178: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.1112e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 280ms/epoch - 7ms/step Epoch 179/500 Epoch 00179: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.0960e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 294ms/epoch - 7ms/step Epoch 180/500 Epoch 00180: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.0809e-04 - accuracy: 0.0000e+00 - val_loss: 0.0037 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step Epoch 181/500 Epoch 00181: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.0659e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 
256ms/epoch - 6ms/step Epoch 182/500 Epoch 00182: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.0510e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step Epoch 183/500 Epoch 00183: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.0363e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step Epoch 184/500 Epoch 00184: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.0216e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step Epoch 185/500 Epoch 00185: val_loss did not improve from 0.00311 43/43 - 0s - loss: 7.0071e-04 - accuracy: 0.0000e+00 - val_loss: 0.0038 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step Epoch 186/500 Epoch 00186: val_loss did not improve from 0.00311 43/43 - 0s - loss: 6.9927e-04 - accuracy: 0.0000e+00 - val_loss: 0.0039 - val_accuracy: 0.0037 - lr: 1.0000e-05 - 303ms/epoch - 7ms/step Epoch 00186: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.51% Accuracy
MSE: 75.03401716737034
RMSE: 8.662217797271685
MAPE: 7.077228582293258
EMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 70.28436187942754
RMSE: 8.383576914386099
MAPE: 6.876111393338704
WMA
Prediction vs Close: 54.85% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 70.57086226636761
RMSE: 8.400646538592587
MAPE: 6.6664001460728475
DEMA
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 44.4% Accuracy
MSE: 329.6035699397079
RMSE: 18.15498746735199
MAPE: 16.799244301034683
KAMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 103.27437965196852
RMSE: 10.162400289890599
MAPE: 8.510636158449836
MIDPOINT
Prediction vs Close: 51.87% Accuracy
Prediction vs Prediction: 45.15% Accuracy
MSE: 97.31838139819504
RMSE: 9.86500792692003
MAPE: 8.251875922025462
T3
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 46.27% Accuracy
MSE: 154.17386959926716
RMSE: 12.41667707558134
MAPE: 10.12780101556255
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
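The TA-Lib help text above describes the call signature only. TEMA itself is defined as 3·EMA1 − 3·EMA2 + EMA3, where each EMA is taken of the previous one. A pure-pandas sketch of that formula (illustrative, not the exact TA-Lib routine; TA-Lib seeds its EMAs with an SMA and returns NaN over the warm-up window, so values near the start differ slightly):

```python
import pandas as pd

def tema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    """Triple Exponential Moving Average: 3*EMA1 - 3*EMA2 + EMA3."""
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    ema3 = ema2.ewm(span=timeperiod, adjust=False).mean()
    return 3 * ema1 - 3 * ema2 + ema3
```

The triple smoothing cancels much of the lag a single EMA introduces, at the cost of amplifying noise, which is consistent with TEMA needing its own ARIMA search below.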
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16736.686, Time=3.84 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-15327.143, Time=3.43 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15166.078, Time=7.53 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14962.662, Time=14.44 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16731.606, Time=6.04 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14848.952, Time=10.31 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16921.745, Time=6.21 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14958.662, Time=18.13 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15003.046, Time=13.56 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16752.122, Time=3.90 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 87.410 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8492.873
Date: Sun, 12 Dec 2021 AIC -16921.745
Time: 18:02:15 BIC -16771.638
Sample: 0 HQIC -16864.098
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.277e-08 0.001 3.25e-05 1.000 -0.001 0.001
x2 2.286e-08 0.001 2.5e-05 1.000 -0.002 0.002
x3 2.286e-08 0.001 3.44e-05 1.000 -0.001 0.001
x4 1.0000 0.000 3190.279 0.000 0.999 1.001
x5 2.174e-08 0.001 4.21e-05 1.000 -0.001 0.001
x6 6.124e-09 3.05e-05 0.000 1.000 -5.97e-05 5.97e-05
x7 2.246e-08 0.001 1.67e-05 1.000 -0.003 0.003
x8 -0.0013 0.001 -1.669 0.095 -0.003 0.000
x9 -5.239e-09 0.000 -1.79e-05 1.000 -0.001 0.001
x10 0.0001 9.9e-05 1.396 0.163 -5.59e-05 0.000
x11 -0.0001 0.001 -0.177 0.859 -0.002 0.001
x12 0.0012 0.001 1.426 0.154 -0.000 0.003
x13 2.284e-08 0.000 6.75e-05 1.000 -0.001 0.001
x14 6.258e-08 0.001 5.07e-05 1.000 -0.002 0.002
x15 2.215e-08 0.000 0.000 1.000 -0.000 0.000
x16 3.243e-08 0.000 0.000 1.000 -0.001 0.001
x17 2.22e-08 0.000 0.000 1.000 -0.000 0.000
x18 7.527e-09 0.000 1.67e-05 1.000 -0.001 0.001
x19 2.477e-08 0.000 0.000 1.000 -0.000 0.000
x20 -2.348e-08 0.000 -5.78e-05 1.000 -0.001 0.001
x21 2.718e-08 5.8e-05 0.000 1.000 -0.000 0.000
x22 -2.176e-10 0.000 -5.27e-07 1.000 -0.001 0.001
x23 -2.69e-09 8.49e-05 -3.17e-05 1.000 -0.000 0.000
x24 -4.516e-08 7.24e-06 -0.006 0.995 -1.42e-05 1.41e-05
x25 -4.213e-08 2.81e-05 -0.002 0.999 -5.51e-05 5.5e-05
x26 7.946e-08 0.001 0.000 1.000 -0.001 0.001
x27 4.528e-08 0.001 6.21e-05 1.000 -0.001 0.001
x28 5.92e-08 0.001 4.12e-05 1.000 -0.003 0.003
x29 3.468e-08 0.000 7.06e-05 1.000 -0.001 0.001
ma.L1 -1.3739 4.46e-06 -3.08e+05 0.000 -1.374 -1.374
ma.L2 0.3968 1.4e-05 2.84e+04 0.000 0.397 0.397
sigma2 7.701e-11 7.39e-11 1.043 0.297 -6.78e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 61.47 Jarque-Bera (JB): 5565463.09
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 10.97
Prob(H) (two-sided): 0.00 Kurtosis: 409.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.67e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. super(Adam, self).__init__(name, **kwargs)
[Training log truncated: epochs 1-71. val_loss improved from 0.09728 to a best of 0.00456 (epoch 21, saved to LSTM6.h5); ReduceLROnPlateau cut the learning rate from 1e-03 to 1e-04 (epoch 12) and to the 1e-05 floor (epoch 25); with no further improvement, EarlyStopping fired at epoch 71.]
Final test-set results per moving average (the last column is printed as "MAPE" in the raw log, but the code actually prints the MAE value):

MA        Pred vs Close  Pred vs Pred  MSE        RMSE      MAE
SMA       53.36%         48.51%         75.0340    8.6622    7.0772
EMA       55.22%         45.15%         70.2844    8.3836    6.8761
WMA       54.85%         45.15%         70.5709    8.4006    6.6664
DEMA      51.87%         44.40%        329.6036   18.1550   16.7992
KAMA      52.24%         45.15%        103.2744   10.1624    8.5106
MIDPOINT  51.87%         45.15%         97.3184    9.8650    8.2519
T3        54.10%         46.27%        154.1739   12.4167   10.1278
TEMA      51.12%         48.51%         87.3863    9.3481    8.3582

Runtime: 58.02 mins
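The "Prediction vs Close" accuracy counts a hit when the prediction and the actual close move in the same direction relative to the previous close. A minimal sketch with made-up numbers (note: this simplified version also counts simultaneous flat/down moves as hits, unlike the exact three-branch loop used in the simulation code below):

```python
# Hedged sketch (toy data, not the experiment's series) of directional accuracy:
# a hit when prediction and actual close agree on direction vs. the prior close.
def directional_accuracy(pred, actual):
    hits = [1 if (p > a_prev) == (a > a_prev) else 0
            for p, a, a_prev in zip(pred[1:], actual[1:], actual[:-1])]
    return sum(hits) / len(hits)

print(directional_accuracy([10, 12, 11, 13], [10, 11, 12, 12.5]))  # 2 of 3 hits
```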
from google.colab import files
import cv2
import matplotlib.pyplot as plt

uploaded = files.upload()
Saving Experiment6.png to Experiment6 (1).png
imgfile = 'Experiment6'  # assumed: the architecture diagram exported by plot_model above
img = cv2.imread('Experiment6.png')
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
for SIM in simulation6.keys():
    plot_train(simulation6, SIM)
    plot_test(simulation6, SIM)
MA        Train RMSE  Train MSE   Train MAE  |  Test RMSE  Test MSE   Test MAE
SMA        8.8822      78.8942     7.7626    |   8.6622     75.0340    7.0772
EMA       10.1602     103.2293     9.0053    |   8.3836     70.2844    6.8761
WMA       10.4334     108.8559     9.2734    |   8.4006     70.5709    6.6664
DEMA      12.0415     144.9989    10.7987    |  18.1550    329.6036   16.7992
KAMA      10.5520     111.3447     9.4967    |  10.1624    103.2744    8.5106
MIDPOINT   9.5002      90.2537     8.4257    |   9.8650     97.3184    8.2519
T3        12.0760     145.8307    10.8808    |  12.4167    154.1739   10.1278
TEMA       7.4393      55.3436     5.1147    |   9.3481     87.3863    8.3582
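A useful way to read the train/test pairs above is the generalization gap: DEMA's test RMSE (18.15) is roughly 1.5x its train RMSE (12.04), and TEMA's 9.35 vs 7.44 is about 1.26x, both signs of overfitting, while SMA's test error is actually below its train error. A toy sketch of that check, using three of the RMSE pairs reported above:

```python
# Hedged sketch: flag MAs whose test RMSE exceeds train RMSE by >20%,
# a rough overfitting signal. Values copied from the log above.
rmse = {
    'SMA':  (8.8822, 8.6622),    # (train, test)
    'DEMA': (12.0415, 18.1550),
    'TEMA': (7.4393, 9.3481),
}
flagged = [ma for ma, (train, test) in rmse.items() if test / train > 1.2]
print(flagged)  # ['DEMA', 'TEMA']
```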
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    # original_data is the raw close series (passed in as low_vol_data by the caller)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    # dataset_final, n_steps_in, n_steps_out are globals defined in earlier cells
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    # Determine model order via stepwise search
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate one-step-ahead predictions, walking forward through the test set:
    # refit on the growing history, predict one step, then append the true value
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict()[0])
        y_train_list.append(y_test_list[i])
    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))
    # Generate error data
    mse = mean_squared_error(yc_test, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
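The walk-forward loop inside get_arima_exog refits an ARIMA of the chosen order on the growing history, forecasts one step, then appends the true value before the next step. A self-contained sketch of that pattern (the real code uses pmdarima.ARIMA; this stand-in forecaster just predicts the last observed value, so the names here are illustrative only):

```python
# Hedged sketch of walk-forward one-step forecasting: the predict_one stand-in
# is a naive last-value forecaster, not the ARIMA model used in the notebook.
def walk_forward(train, test, predict_one=lambda hist: hist[-1]):
    history = list(train)
    preds = []
    for actual in test:
        preds.append(predict_one(history))  # one-step-ahead forecast
        history.append(actual)              # expand the training window
    return preds

print(walk_forward([1, 2, 3], [4, 5, 6]))  # naive forecasts: [3, 4, 5]
```

The key property, mirrored in the notebook's loop, is that each forecast uses only data available up to that point, so no test information leaks into earlier predictions.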
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # X is shaped (samples, n_steps_in, features); yc holds the corresponding closing prices
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # manual offset subtracted from the test predictions below
    input_dim = X_train.shape[1]     # n_steps_in
    feature_size = X_train.shape[2]  # number of features
    output_dim = y_train.shape[1]    # n_steps_out
    # Options 1, 2, and 4 are earlier architectures kept for reference; each was
    # trained with the same callbacks / plot_model / loss-plot code as Option 3 below.
    # Option 1
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64, activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    # Option 2
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # Option 3
    # Define custom activation
    # reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
    class Double_Tanh(Activation):
        def __init__(self, activation, **kwargs):
            super(Double_Tanh, self).__init__(activation, **kwargs)
            self.__name__ = 'double_tanh'

    def double_tanh(x):
        return K.tanh(x) * 2

    get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})
    # Model generation
    # weight regularization reference:
    # https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    model = Sequential()
    model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2,
                   kernel_regularizer=l1_l2(0.00, 0.00), bias_regularizer=l1_l2(0.00, 0.00)))
    model.add(Dense(1))
    model.add(Activation(double_tanh))
    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # Common code
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM7.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False)
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot loss
    fname2 = img_file + '-' + ma
    plt.title(img_file + '-' + ma + ' Loss')
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    pyplot.plot(history.history['loss'], label='train')
    pyplot.plot(history.history['val_loss'], label='validation')
    pyplot.legend()
    pyplot.savefig(fname2 + '.png', dpi='figure')
    pyplot.show()
    # Option 4 (also trained with the common code above)
    # model = Sequential()
    # model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(X_train.shape[1], 1)))
    # model.add(LSTM(units=int(lstm_len / 2)))
    # model.add(Dense(1, activation='sigmoid'))
    # model.compile(loss='mean_squared_error', optimizer='adam')
    # Generate predictions
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).tolist()
    outputtr = []
    for i in range(len(predictiontr)):
        outputtr.extend(predictiontr[i])
    predictiontr = outputtr
    # Generate train error data
    # (note: y_train/y_test below are still in scaled units, while the
    # predictions have been inverse-transformed to price units)
    mse_tr = mean_squared_error(y_train, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(y_train, pd.Series(predictiontr))
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).tolist()
    outputte = []
    for i in range(len(predictionte)):
        outputte.extend(predictionte[i])
    predictionte = outputte
    # Generate test error data
    mse_te = mean_squared_error(y_test, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(y_test, pd.Series(predictionte))
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
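The custom activation in Option 3 is simply tanh scaled by two, widening the output range from (-1, 1) to (-2, 2) so the network can reach the edges of the MinMax-scaled target range with less saturation. A quick pure-Python sanity check of that behavior (using math.tanh in place of the Keras backend):

```python
# Sanity check of the double_tanh activation defined above: tanh(x) * 2,
# so outputs are bounded in (-2, 2) and pass through the origin.
import math

def double_tanh(x):
    return math.tanh(x) * 2

print(double_tanh(0.0))               # 0.0
print(round(double_tanh(10.0), 3))    # approaches the upper bound 2
```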
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation7 = {}
    imgfile = 'Experiment7'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        # Decompose each series: low_vol is the moving average, high_vol the residual
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for i in df2.columns:
            if i in low_vol.columns:
                high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
        high_vol_data = df['close']
        # Generate ARIMA and LSTM predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima_exog(low_vol, low_vol_data, train_len, test_len)
        except:
            print('ARIMA error, skipping to next MA type')
            continue
        Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        # Recombine the two components, ignoring the first n_steps_in (3) steps
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation7[ma] = {'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                                       'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
                           'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                                        'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
                           'final_tr': {'original': df['close'].head(train_len).tolist(), 'prediction': final_prediction_tr.values.tolist(),
                                        'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr},
                           'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(),
                                     'mse': mse, 'rmse': rmse, 'mae': mae},
                           'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation7_data.json', 'w') as fp:
            json.dump(simulation7, fp)
    for ma in simulation7.keys():
        print('\n' + ma)
        print('Prediction vs Close:\t\t' + str(round(100 * simulation7[ma]['accuracy']['prediction vs close'], 2)) + '% Accuracy')
        print('Prediction vs Prediction:\t' + str(round(100 * simulation7[ma]['accuracy']['prediction vs prediction'], 2)) + '% Accuracy')
        print('MSE:\t', simulation7[ma]['final']['mse'],
              '\nRMSE:\t', simulation7[ma]['final']['rmse'],
              '\nMAE:\t', simulation7[ma]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed / 60)
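The decomposition step in the loop above splits each series into a smooth moving-average component (low_vol, handed to ARIMA) and a residual component (high_vol, handed to the LSTM), and their sum reconstructs the original series exactly. A self-contained sketch of that identity with toy numbers (pure Python in place of pandas/TA-Lib):

```python
# Hedged sketch (toy data) of the volatility split: low_vol = SMA(close),
# high_vol = close - low_vol, so low_vol + high_vol == close everywhere
# (here the SMA warm-up is filled with 0, mirroring fillna(0) above).
def sma(series, period):
    return [sum(series[i - period + 1:i + 1]) / period if i >= period - 1 else 0.0
            for i in range(len(series))]

close = [10.0, 11.0, 12.0, 13.0, 12.0]
low_vol = sma(close, 3)
high_vol = [c - l for c, l in zip(close, low_vol)]
recon = [l + h for l, h in zip(low_vol, high_vol)]
print(recon == close)  # True
```

Because the split is exact, any error in the final prediction comes entirely from the two component forecasts, which is what makes the per-component MSE/RMSE/MAE entries stored in simulation7 meaningful.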
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.787, Time=3.58 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.588, Time=5.63 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14596.280, Time=5.74 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.588, Time=8.45 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16924.805, Time=10.66 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14482.349, Time=11.42 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17215.608, Time=20.46 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.588, Time=11.13 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15570.350, Time=19.72 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11671.292, Time=28.32 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 125.145 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8639.804
Date: Sun, 12 Dec 2021 AIC -17215.608
Time: 18:11:46 BIC -17065.501
Sample: 0 HQIC -17157.961
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.057e-09 5.82e-05 -6.97e-05 1.000 -0.000 0.000
x2 -4.057e-09 5.81e-05 -6.99e-05 1.000 -0.000 0.000
x3 -4.111e-09 5.49e-05 -7.49e-05 1.000 -0.000 0.000
x4 1.0000 5.71e-05 1.75e+04 0.000 1.000 1.000
x5 -3.706e-09 5.43e-05 -6.82e-05 1.000 -0.000 0.000
x6 -1.082e-08 0.000 -6.08e-05 1.000 -0.000 0.000
x7 -4.025e-09 5.63e-05 -7.15e-05 1.000 -0.000 0.000
x8 -4.035e-09 5.19e-05 -7.78e-05 1.000 -0.000 0.000
x9 -1.522e-10 2.9e-05 -5.25e-06 1.000 -5.68e-05 5.68e-05
x10 -6.396e-10 1.04e-05 -6.15e-05 1.000 -2.04e-05 2.04e-05
x11 -3.921e-09 5.06e-05 -7.75e-05 1.000 -9.91e-05 9.91e-05
x12 -4.102e-09 5.29e-05 -7.76e-05 1.000 -0.000 0.000
x13 -4.087e-09 5.75e-05 -7.11e-05 1.000 -0.000 0.000
x14 -3.619e-08 0.000 -0.000 1.000 -0.000 0.000
x15 -4.806e-09 4.61e-05 -0.000 1.000 -9.03e-05 9.03e-05
x16 -3.507e-09 0.000 -2.98e-05 1.000 -0.000 0.000
x17 -3.121e-09 6.02e-05 -5.18e-05 1.000 -0.000 0.000
x18 -1.172e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -5.433e-09 6.06e-05 -8.96e-05 1.000 -0.000 0.000
x20 -1.393e-08 4.79e-05 -0.000 1.000 -9.39e-05 9.39e-05
x21 -4.216e-09 6.63e-05 -6.36e-05 1.000 -0.000 0.000
x22 -3.479e-11 1.66e-08 -0.002 0.998 -3.25e-08 3.24e-08
x23 -9.221e-10 1.4e-07 -0.007 0.995 -2.74e-07 2.73e-07
x24 -8.085e-08 0.001 -6.96e-05 1.000 -0.002 0.002
x25 -9.642e-08 0.001 -0.000 1.000 -0.002 0.002
x26 -5.019e-08 0.000 -0.000 1.000 -0.000 0.000
x27 -2.457e-08 7.65e-05 -0.000 1.000 -0.000 0.000
x28 -3.411e-08 0.000 -0.000 1.000 -0.000 0.000
x29 -1.507e-08 4.36e-05 -0.000 1.000 -8.54e-05 8.54e-05
ma.L1 -1.3898 8.03e-07 -1.73e+06 0.000 -1.390 -1.390
ma.L2 0.4031 8.36e-07 4.82e+05 0.000 0.403 0.403
sigma2 7.528e-11 7.24e-11 1.040 0.298 -6.66e-11 2.17e-10
===================================================================================
Ljung-Box (L1) (Q): 89.12 Jarque-Bera (JB): 1533103.33
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.56
Prob(H) (two-sided): 0.00 Kurtosis: 216.50
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.08e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.32789, saving model to LSTM7.h5
48/48 - 2s - loss: 0.0877 - mse: 0.0877 - mae: 0.2289 - val_loss: 0.3279 - val_mse: 0.3279 - val_mae: 0.5464 - lr: 0.0010 - 2s/epoch - 44ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.32789 to 0.08109, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0258 - mse: 0.0258 - mae: 0.1276 - val_loss: 0.0811 - val_mse: 0.0811 - val_mae: 0.2555 - lr: 0.0010 - 243ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.08109
48/48 - 0s - loss: 0.0165 - mse: 0.0165 - mae: 0.1010 - val_loss: 0.0825 - val_mse: 0.0825 - val_mae: 0.2611 - lr: 0.0010 - 200ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.08109 to 0.06805, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0885 - val_loss: 0.0680 - val_mse: 0.0680 - val_mae: 0.2354 - lr: 0.0010 - 238ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.06805
48/48 - 0s - loss: 0.0099 - mse: 0.0099 - mae: 0.0771 - val_loss: 0.0897 - val_mse: 0.0897 - val_mae: 0.2766 - lr: 0.0010 - 208ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.06805 to 0.04796, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0111 - mse: 0.0111 - mae: 0.0810 - val_loss: 0.0480 - val_mse: 0.0480 - val_mae: 0.1935 - lr: 0.0010 - 232ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04796
48/48 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0693 - val_loss: 0.0706 - val_mse: 0.0706 - val_mae: 0.2443 - lr: 0.0010 - 252ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.04796 to 0.03656, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0701 - val_loss: 0.0366 - val_mse: 0.0366 - val_mae: 0.1669 - lr: 0.0010 - 286ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0662 - val_loss: 0.0638 - val_mse: 0.0638 - val_mae: 0.2318 - lr: 0.0010 - 261ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0694 - val_loss: 0.0548 - val_mse: 0.0548 - val_mae: 0.2136 - lr: 0.0010 - 265ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0650 - val_loss: 0.0632 - val_mse: 0.0632 - val_mae: 0.2304 - lr: 0.0010 - 240ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0685 - val_loss: 0.0450 - val_mse: 0.0450 - val_mae: 0.1907 - lr: 0.0010 - 227ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00013: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0769 - val_loss: 0.0858 - val_mse: 0.0858 - val_mae: 0.2735 - lr: 0.0010 - 210ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0259 - mse: 0.0259 - mae: 0.1392 - val_loss: 0.0416 - val_mse: 0.0416 - val_mae: 0.1834 - lr: 1.0000e-04 - 237ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0117 - mse: 0.0117 - mae: 0.0881 - val_loss: 0.0401 - val_mse: 0.0401 - val_mae: 0.1790 - lr: 1.0000e-04 - 244ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.03656
48/48 - 0s - loss: 0.0103 - mse: 0.0103 - mae: 0.0820 - val_loss: 0.0374 - val_mse: 0.0374 - val_mae: 0.1717 - lr: 1.0000e-04 - 212ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.03656 to 0.03439, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0733 - val_loss: 0.0344 - val_mse: 0.0344 - val_mae: 0.1635 - lr: 1.0000e-04 - 210ms/epoch - 4ms/step
Epoch 18/500
Epoch 00018: val_loss improved from 0.03439 to 0.03341, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0722 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1607 - lr: 1.0000e-04 - 251ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.03341 to 0.02962, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0663 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1495 - lr: 1.0000e-04 - 243ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss improved from 0.02962 to 0.02867, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0706 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1466 - lr: 1.0000e-04 - 271ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss improved from 0.02867 to 0.02677, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0657 - val_loss: 0.0268 - val_mse: 0.0268 - val_mae: 0.1406 - lr: 1.0000e-04 - 252ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.02677
48/48 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0631 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1410 - lr: 1.0000e-04 - 271ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.02677
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0636 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1423 - lr: 1.0000e-04 - 232ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss improved from 0.02677 to 0.02567, saving model to LSTM7.h5
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0582 - val_loss: 0.0257 - val_mse: 0.0257 - val_mae: 0.1371 - lr: 1.0000e-04 - 242ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0643 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1380 - lr: 1.0000e-04 - 198ms/epoch - 4ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0625 - val_loss: 0.0275 - val_mse: 0.0275 - val_mae: 0.1432 - lr: 1.0000e-04 - 200ms/epoch - 4ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0583 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1469 - lr: 1.0000e-04 - 226ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0598 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1493 - lr: 1.0000e-04 - 207ms/epoch - 4ms/step
Epoch 29/500
Epoch 00029: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00029: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0618 - val_loss: 0.0302 - val_mse: 0.0302 - val_mae: 0.1514 - lr: 1.0000e-04 - 199ms/epoch - 4ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0567 - val_loss: 0.0305 - val_mse: 0.0305 - val_mae: 0.1522 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0551 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1532 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0545 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1545 - lr: 1.0000e-05 - 213ms/epoch - 4ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0557 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1551 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 34/500
Epoch 00034: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00034: val_loss did not improve from 0.02567
48/48 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0542 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1556 - lr: 1.0000e-05 - 215ms/epoch - 4ms/step
[Epochs 35-74: val_loss did not improve from 0.02567; loss held at 0.0042-0.0052, val_loss drifted between 0.0318 and 0.0335, lr 1.0000e-05]
Epoch 00074: early stopping
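The learning-rate drops and the stop at epoch 74 come from Keras's ReduceLROnPlateau and EarlyStopping callbacks monitoring val_loss. The plateau logic can be replayed in plain Python; the patience values below are assumptions inferred from the epoch numbers in the log, not read from the notebook's code.

```python
# Plain-Python replay of the plateau logic visible in the logs:
# cut lr by 10x after `lr_patience` epochs without a new best
# val_loss (floored at min_lr), stop after `stop_patience` epochs
# without improvement. lr_patience=5, stop_patience=50 are assumed.
def schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
             stop_patience=50, min_lr=1e-5):
    best, since_best, lrs = float("inf"), 0, []
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best % lr_patience == 0:
                lr = max(lr * factor, min_lr)  # 1e-3 -> 1e-4 -> 1e-5
        lrs.append(lr)
        if since_best >= stop_patience:
            return lrs, epoch  # early stopping fires here
    return lrs, None
```

Keras additionally supports a cooldown period and resets its wait counter after each reduction; this sketch omits those details.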
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 44.65212926265077
RMSE: 6.682224873696692
MAPE: 5.204686480071648
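The four SMA figures above (directional accuracy, MSE, RMSE, MAPE) take only a few NumPy lines to compute. The helper below is illustrative; the exact series compared for the two accuracy percentages are not shown in this chunk, so the directional definition here is an assumption.

```python
import numpy as np

def report(pred, close):
    """Illustrative metrics for the block above. Accuracy here is
    the share of steps where the prediction moved in the same
    direction as the actual close (an assumed definition)."""
    pred = np.asarray(pred, dtype=float)
    close = np.asarray(close, dtype=float)
    hits = np.sign(np.diff(pred)) == np.sign(np.diff(close))
    mse = np.mean((pred - close) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((close - pred) / close)) * 100
    return hits.mean() * 100, mse, rmse, mape
```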
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
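The help text above is TA-Lib's EMA signature. The same smoothing can be obtained without TA-Lib via pandas, with the standard alpha = 2 / (timeperiod + 1); note that TA-Lib seeds the EMA with an SMA of the first window, so the earliest values differ slightly from pandas' recursive seeding.

```python
import pandas as pd

def ema(price, timeperiod=30):
    # span=timeperiod gives alpha = 2 / (timeperiod + 1), the same
    # smoothing constant TA-Lib's EMA uses; adjust=False makes it
    # the plain recursive EMA.
    return pd.Series(price).ewm(span=timeperiod, adjust=False).mean()
```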
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.80 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.48 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15952.568, Time=15.31 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=8.47 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16628.634, Time=10.28 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16462.206, Time=25.09 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16848.298, Time=13.32 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.023, Time=6.57 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.619, Time=3.49 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=7.51 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=18.51 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.994, Time=4.01 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.667, Time=4.43 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 126.280 seconds
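The stepwise search above picks the order with the lowest AIC, where AIC = 2k - 2 ln L for k estimated parameters and log-likelihood L. The reported values are self-consistent: the winning SARIMAX(3, 3, 1) fit below has log-likelihood 8569.727 and 34 parameters (29 regressors, 3 AR, 1 MA, sigma2), which reproduces the reported AIC of -17071.454.

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: 2k - 2*ln(L).
    return 2 * n_params - 2 * log_likelihood

# aic(8569.727, 34) reproduces the -17071.454 reported above.
```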
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 18:17:53 BIC -16911.966
Sample: 0 HQIC -17010.204
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.316e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x2 -2.309e-10 6.24e-05 -3.7e-06 1.000 -0.000 0.000
x3 -2.325e-10 6.26e-05 -3.71e-06 1.000 -0.000 0.000
x4 1.0000 6.25e-05 1.6e+04 0.000 1.000 1.000
x5 -2.107e-10 5.96e-05 -3.54e-06 1.000 -0.000 0.000
x6 -7.997e-10 0.000 -7.41e-06 1.000 -0.000 0.000
x7 -2.295e-10 6.22e-05 -3.69e-06 1.000 -0.000 0.000
x8 -2.246e-10 6.15e-05 -3.65e-06 1.000 -0.000 0.000
x9 -1.167e-11 1.25e-05 -9.33e-07 1.000 -2.45e-05 2.45e-05
x10 -4.454e-11 2.66e-05 -1.68e-06 1.000 -5.21e-05 5.21e-05
x11 -2.221e-10 6.11e-05 -3.63e-06 1.000 -0.000 0.000
x12 -2.266e-10 6.18e-05 -3.66e-06 1.000 -0.000 0.000
x13 -2.315e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x14 -1.767e-09 0.000 -1.02e-05 1.000 -0.000 0.000
x15 -2.11e-10 5.93e-05 -3.56e-06 1.000 -0.000 0.000
x16 -5.283e-10 9.45e-05 -5.59e-06 1.000 -0.000 0.000
x17 -2.098e-10 6.01e-05 -3.49e-06 1.000 -0.000 0.000
x18 -3.82e-11 2.41e-05 -1.58e-06 1.000 -4.73e-05 4.73e-05
x19 -2.645e-10 6.61e-05 -4e-06 1.000 -0.000 0.000
x20 -2.417e-10 6.21e-05 -3.89e-06 1.000 -0.000 0.000
x21 -4.824e-10 8.83e-05 -5.46e-06 1.000 -0.000 0.000
x22 -3.758e-13 1.19e-11 -0.032 0.975 -2.36e-11 2.29e-11
x23 -1.089e-11 8.42e-11 -0.129 0.897 -1.76e-10 1.54e-10
x24 -2.538e-09 0.000 -1.44e-05 1.000 -0.000 0.000
x25 -2.038e-09 0.000 -1.49e-05 1.000 -0.000 0.000
x26 -3.16e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x27 -2.955e-09 0.000 -1.32e-05 1.000 -0.000 0.000
x28 -1.664e-09 0.000 -9.94e-06 1.000 -0.000 0.000
x29 -1.568e-09 0.000 -9.63e-06 1.000 -0.000 0.000
ar.L1 -0.4923 6.2e-10 -7.94e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 3.6e-10 -5.35e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.71e-10 -2.71e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.41e-09 -5.04e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 51.79 Jarque-Bera (JB): 4012066.18
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.44
Prob(H) (two-sided): 0.00 Kurtosis: 348.68
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.01e+30. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
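With the ARIMA order fixed at (3, 3, 1), the hybrid model hands a series to the LSTM, which needs the data reshaped into overlapping lookback windows of shape (samples, timesteps, features). A minimal sketch, with an assumed lookback of 10 (the notebook's actual window length is not shown in this chunk):

```python
import numpy as np

def make_windows(series, lookback=10):
    """Slide a window over a 1-D series to build LSTM inputs of
    shape (samples, lookback, 1) plus next-step targets."""
    s = np.asarray(series, dtype=float)
    X = np.stack([s[i:i + lookback] for i in range(len(s) - lookback)])
    y = s[lookback:]
    return X[..., None], y

# e.g. 15 points with lookback 10 -> X.shape == (5, 10, 1)
```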
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.07245, saving model to LSTM7.h5
16/16 - 2s - loss: 0.6031 - mse: 0.6031 - mae: 0.5816 - val_loss: 0.0725 - val_mse: 0.0725 - val_mae: 0.2175 - lr: 0.0010 - 2s/epoch - 131ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.07245 to 0.05426, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0982 - mse: 0.0982 - mae: 0.2715 - val_loss: 0.0543 - val_mse: 0.0543 - val_mae: 0.1852 - lr: 0.0010 - 96ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.05426 to 0.05329, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0308 - mse: 0.0308 - mae: 0.1410 - val_loss: 0.0533 - val_mse: 0.0533 - val_mae: 0.1773 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.05329 to 0.05020, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0288 - mse: 0.0288 - mae: 0.1320 - val_loss: 0.0502 - val_mse: 0.0502 - val_mae: 0.1733 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05020
16/16 - 0s - loss: 0.0203 - mse: 0.0203 - mae: 0.1148 - val_loss: 0.0509 - val_mse: 0.0509 - val_mae: 0.1750 - lr: 0.0010 - 81ms/epoch - 5ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.05020 to 0.04909, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0208 - mse: 0.0208 - mae: 0.1149 - val_loss: 0.0491 - val_mse: 0.0491 - val_mae: 0.1715 - lr: 0.0010 - 89ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.04909 to 0.04433, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0177 - mse: 0.0177 - mae: 0.1061 - val_loss: 0.0443 - val_mse: 0.0443 - val_mae: 0.1622 - lr: 0.0010 - 88ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.04433 to 0.04346, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0156 - mse: 0.0156 - mae: 0.0996 - val_loss: 0.0435 - val_mse: 0.0435 - val_mae: 0.1608 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.04346 to 0.04201, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.0956 - val_loss: 0.0420 - val_mse: 0.0420 - val_mae: 0.1584 - lr: 0.0010 - 107ms/epoch - 7ms/step
Epoch 10/500
Epoch 00010: val_loss improved from 0.04201 to 0.04017, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0151 - mse: 0.0151 - mae: 0.0974 - val_loss: 0.0402 - val_mse: 0.0402 - val_mae: 0.1550 - lr: 0.0010 - 115ms/epoch - 7ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.04017
16/16 - 0s - loss: 0.0124 - mse: 0.0124 - mae: 0.0884 - val_loss: 0.0406 - val_mse: 0.0406 - val_mae: 0.1565 - lr: 0.0010 - 97ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.04017 to 0.03925, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0125 - mse: 0.0125 - mae: 0.0892 - val_loss: 0.0393 - val_mse: 0.0393 - val_mae: 0.1539 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.03925 to 0.03614, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0116 - mse: 0.0116 - mae: 0.0868 - val_loss: 0.0361 - val_mse: 0.0361 - val_mae: 0.1468 - lr: 0.0010 - 99ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss improved from 0.03614 to 0.03480, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0120 - mse: 0.0120 - mae: 0.0858 - val_loss: 0.0348 - val_mse: 0.0348 - val_mae: 0.1438 - lr: 0.0010 - 89ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss improved from 0.03480 to 0.03399, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0094 - mse: 0.0094 - mae: 0.0779 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1421 - lr: 0.0010 - 108ms/epoch - 7ms/step
[Epochs 16-64: val_loss did not improve from 0.03399; ReduceLROnPlateau cut the learning rate to 1.0000e-04 at epoch 20 and to 1.0000e-05 at epoch 25; loss settled at 0.0071-0.0109, val_loss at 0.0341-0.0364]
Epoch 65/500
Epoch 00065: val_loss improved from 0.03399 to 0.03398, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0715 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1458 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 66/500
Epoch 00066: val_loss improved from 0.03398 to 0.03393, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0707 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1457 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.03393
16/16 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0694 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1458 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.03393
16/16 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0687 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1459 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 69/500
Epoch 00069: val_loss did not improve from 0.03393
16/16 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0666 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1459 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 70/500
Epoch 00070: val_loss did not improve from 0.03393
16/16 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0689 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1458 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 71/500
Epoch 00071: val_loss did not improve from 0.03393
16/16 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0702 - val_loss: 0.0340 - val_mse: 0.0340 - val_mae: 0.1458 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 72/500
Epoch 00072: val_loss improved from 0.03393 to 0.03390, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0700 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1457 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 73/500
Epoch 00073: val_loss improved from 0.03390 to 0.03384, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0689 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1455 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 74/500
Epoch 00074: val_loss did not improve from 0.03384
16/16 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0690 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1456 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 75/500
Epoch 00075: val_loss did not improve from 0.03384
16/16 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0687 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1457 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 76/500
Epoch 00076: val_loss did not improve from 0.03384
16/16 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0704 - val_loss: 0.0339 - val_mse: 0.0339 - val_mae: 0.1456 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 77/500
Epoch 00077: val_loss improved from 0.03384 to 0.03380, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0693 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1455 - lr: 1.0000e-05 - 117ms/epoch - 7ms/step
[Epochs 78-86: val_loss did not improve from 0.03380; loss 0.0073-0.0088, val_loss ~0.0338]
Epoch 87/500
Epoch 00087: val_loss improved from 0.03380 to 0.03380, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0698 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1456 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 88/500
Epoch 00088: val_loss improved from 0.03380 to 0.03380, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0693 - val_loss: 0.0338 - val_mse: 0.0338 - val_mae: 0.1456 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 89/500
Epoch 00089: val_loss improved from 0.03380 to 0.03375, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0686 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1455 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 90/500
Epoch 00090: val_loss improved from 0.03375 to 0.03368, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0677 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1453 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 91/500
Epoch 00091: val_loss improved from 0.03368 to 0.03367, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0719 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1453 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 92/500
Epoch 00092: val_loss improved from 0.03367 to 0.03367, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0683 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1453 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 93/500
Epoch 00093: val_loss did not improve from 0.03367
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0666 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1455 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 94/500
Epoch 00094: val_loss did not improve from 0.03367
16/16 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0695 - val_loss: 0.0337 - val_mse: 0.0337 - val_mae: 0.1454 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 95/500
Epoch 00095: val_loss improved from 0.03367 to 0.03358, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0703 - val_loss: 0.0336 - val_mse: 0.0336 - val_mae: 0.1451 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 96/500
Epoch 00096: val_loss improved from 0.03358 to 0.03357, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0645 - val_loss: 0.0336 - val_mse: 0.0336 - val_mae: 0.1451 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 97/500
Epoch 00097: val_loss improved from 0.03357 to 0.03356, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0670 - val_loss: 0.0336 - val_mse: 0.0336 - val_mae: 0.1451 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 98/500
Epoch 00098: val_loss did not improve from 0.03356
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0661 - val_loss: 0.0336 - val_mse: 0.0336 - val_mae: 0.1451 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 99/500
Epoch 00099: val_loss improved from 0.03356 to 0.03352, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0677 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1450 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
... (epochs 100–129: val_loss did not improve from 0.03352) ...
Epoch 130/500
Epoch 00130: val_loss improved from 0.03352 to 0.03350, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0708 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1455 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 131/500
Epoch 00131: val_loss did not improve from 0.03350
16/16 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0639 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1455 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 132/500
Epoch 00132: val_loss improved from 0.03350 to 0.03347, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0718 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1454 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 133/500
Epoch 00133: val_loss did not improve from 0.03347
16/16 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0703 - val_loss: 0.0336 - val_mse: 0.0336 - val_mae: 0.1457 - lr: 1.0000e-05 - 74ms/epoch - 5ms/step
Epoch 134/500
Epoch 00134: val_loss did not improve from 0.03347
16/16 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0661 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1456 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 135/500
Epoch 00135: val_loss improved from 0.03347 to 0.03346, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0656 - val_loss: 0.0335 - val_mse: 0.0335 - val_mae: 0.1455 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 136/500
Epoch 00136: val_loss improved from 0.03346 to 0.03336, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0683 - val_loss: 0.0334 - val_mse: 0.0334 - val_mae: 0.1452 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 137/500
Epoch 00137: val_loss improved from 0.03336 to 0.03328, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0695 - val_loss: 0.0333 - val_mse: 0.0333 - val_mae: 0.1450 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 138/500
Epoch 00138: val_loss improved from 0.03328 to 0.03321, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0741 - val_loss: 0.0332 - val_mse: 0.0332 - val_mae: 0.1449 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 139/500
Epoch 00139: val_loss improved from 0.03321 to 0.03318, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0634 - val_loss: 0.0332 - val_mse: 0.0332 - val_mae: 0.1448 - lr: 1.0000e-05 - 111ms/epoch - 7ms/step
... (epochs 140–169: val_loss did not improve from 0.03318) ...
Epoch 170/500
Epoch 00170: val_loss improved from 0.03318 to 0.03314, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0683 - val_loss: 0.0331 - val_mse: 0.0331 - val_mae: 0.1454 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 171/500
Epoch 00171: val_loss improved from 0.03314 to 0.03308, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0646 - val_loss: 0.0331 - val_mse: 0.0331 - val_mae: 0.1452 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 172/500
Epoch 00172: val_loss improved from 0.03308 to 0.03294, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0662 - val_loss: 0.0329 - val_mse: 0.0329 - val_mae: 0.1448 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 173/500
Epoch 00173: val_loss improved from 0.03294 to 0.03271, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0669 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1442 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 174/500
Epoch 00174: val_loss improved from 0.03271 to 0.03261, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0659 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1440 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 175/500
Epoch 00175: val_loss did not improve from 0.03261
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0641 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1442 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 176/500
Epoch 00176: val_loss did not improve from 0.03261
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0653 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1443 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 177/500
Epoch 00177: val_loss did not improve from 0.03261
16/16 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0685 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1442 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 178/500
Epoch 00178: val_loss did not improve from 0.03261
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0639 - val_loss: 0.0327 - val_mse: 0.0327 - val_mae: 0.1443 - lr: 1.0000e-05 - 75ms/epoch - 5ms/step
Epoch 179/500
Epoch 00179: val_loss did not improve from 0.03261
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0622 - val_loss: 0.0326 - val_mse: 0.0326 - val_mae: 0.1442 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 180/500
Epoch 00180: val_loss improved from 0.03261 to 0.03253, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0639 - val_loss: 0.0325 - val_mse: 0.0325 - val_mae: 0.1439 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 181/500
Epoch 00181: val_loss improved from 0.03253 to 0.03239, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0658 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1435 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 182/500
Epoch 00182: val_loss improved from 0.03239 to 0.03237, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0661 - val_loss: 0.0324 - val_mse: 0.0324 - val_mae: 0.1435 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 183/500
Epoch 00183: val_loss improved from 0.03237 to 0.03219, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0667 - val_loss: 0.0322 - val_mse: 0.0322 - val_mae: 0.1430 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 184/500
Epoch 00184: val_loss improved from 0.03219 to 0.03213, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0641 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1429 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 185/500
Epoch 00185: val_loss improved from 0.03213 to 0.03205, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0654 - val_loss: 0.0321 - val_mse: 0.0321 - val_mae: 0.1427 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 186/500
Epoch 00186: val_loss improved from 0.03205 to 0.03193, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0667 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1423 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 187/500
Epoch 00187: val_loss improved from 0.03193 to 0.03192, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0659 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1423 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
... (epochs 188–222: val_loss did not improve from 0.03192) ...
Epoch 223/500
Epoch 00223: val_loss improved from 0.03192 to 0.03191, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0625 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1433 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 224/500
Epoch 00224: val_loss improved from 0.03191 to 0.03190, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0650 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1433 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 225/500
Epoch 00225: val_loss did not improve from 0.03190
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0619 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1434 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 226/500
Epoch 00226: val_loss did not improve from 0.03190
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0630 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1434 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 227/500
Epoch 00227: val_loss improved from 0.03190 to 0.03186, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0626 - val_loss: 0.0319 - val_mse: 0.0319 - val_mae: 0.1432 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 228/500
Epoch 00228: val_loss improved from 0.03186 to 0.03178, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0650 - val_loss: 0.0318 - val_mse: 0.0318 - val_mae: 0.1430 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
... (epochs 229–240: val_loss did not improve from 0.03178) ...
Epoch 241/500
Epoch 00241: val_loss improved from 0.03178 to 0.03168, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0619 - val_loss: 0.0317 - val_mse: 0.0317 - val_mae: 0.1431 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 242/500
Epoch 00242: val_loss improved from 0.03168 to 0.03161, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0627 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1429 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 243/500
Epoch 00243: val_loss did not improve from 0.03161
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0621 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1430 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 244/500
Epoch 00244: val_loss improved from 0.03161 to 0.03161, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0634 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1430 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 245/500
Epoch 00245: val_loss improved from 0.03161 to 0.03150, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0661 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1428 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 246/500
Epoch 00246: val_loss improved from 0.03150 to 0.03136, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0631 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1424 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 247/500
Epoch 00247: val_loss improved from 0.03136 to 0.03122, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0643 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1420 - lr: 1.0000e-05 - 105ms/epoch - 7ms/step
Epoch 248/500
Epoch 00248: val_loss did not improve from 0.03122
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0625 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1425 - lr: 1.0000e-05 - 125ms/epoch - 8ms/step
Epoch 249/500
Epoch 00249: val_loss did not improve from 0.03122
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0638 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1429 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 250/500
Epoch 00250: val_loss did not improve from 0.03122
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0624 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1429 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 251/500
Epoch 00251: val_loss did not improve from 0.03122
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0638 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1428 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 252/500
Epoch 00252: val_loss did not improve from 0.03122
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0627 - val_loss: 0.0313 - val_mse: 0.0313 - val_mae: 0.1423 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 253/500
Epoch 00253: val_loss improved from 0.03122 to 0.03121, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0659 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1421 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 254/500
Epoch 00254: val_loss did not improve from 0.03121
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0615 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1421 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 255/500
Epoch 00255: val_loss improved from 0.03121 to 0.03116, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0617 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1420 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 256/500
Epoch 00256: val_loss improved from 0.03116 to 0.03108, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0614 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1417 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 257/500
Epoch 00257: val_loss improved from 0.03108 to 0.03100, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0594 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1416 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 258/500
Epoch 00258: val_loss improved from 0.03100 to 0.03095, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0613 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1415 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 259/500
Epoch 00259: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0638 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1417 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 260/500
Epoch 00260: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0627 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1416 - lr: 1.0000e-05 - 76ms/epoch - 5ms/step
Epoch 261/500
Epoch 00261: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0669 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1417 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 262/500
Epoch 00262: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0622 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1423 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 263/500
Epoch 00263: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0614 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1431 - lr: 1.0000e-05 - 101ms/epoch - 6ms/step
Epoch 264/500
Epoch 00264: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0610 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1433 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 265/500
Epoch 00265: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0608 - val_loss: 0.0316 - val_mse: 0.0316 - val_mae: 0.1433 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 266/500
Epoch 00266: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0619 - val_loss: 0.0315 - val_mse: 0.0315 - val_mae: 0.1432 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 267/500
Epoch 00267: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0601 - val_loss: 0.0314 - val_mse: 0.0314 - val_mae: 0.1429 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 268/500
Epoch 00268: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0600 - val_loss: 0.0312 - val_mse: 0.0312 - val_mae: 0.1424 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 269/500
Epoch 00269: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0613 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1423 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 270/500
Epoch 00270: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0561 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1423 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 271/500
Epoch 00271: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0643 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1420 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 272/500
Epoch 00272: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0616 - val_loss: 0.0310 - val_mse: 0.0310 - val_mae: 0.1421 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 273/500
Epoch 00273: val_loss did not improve from 0.03095
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0593 - val_loss: 0.0311 - val_mse: 0.0311 - val_mae: 0.1423 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 274/500
Epoch 00274: val_loss improved from 0.03095 to 0.03077, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0603 - val_loss: 0.0308 - val_mse: 0.0308 - val_mae: 0.1414 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 275/500
Epoch 00275: val_loss improved from 0.03077 to 0.03056, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0612 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1408 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 276/500
Epoch 00276: val_loss improved from 0.03056 to 0.03049, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0613 - val_loss: 0.0305 - val_mse: 0.0305 - val_mae: 0.1406 - lr: 1.0000e-05 - 114ms/epoch - 7ms/step
Epoch 277/500
Epoch 00277: val_loss did not improve from 0.03049
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0616 - val_loss: 0.0305 - val_mse: 0.0305 - val_mae: 0.1407 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 278/500
Epoch 00278: val_loss improved from 0.03049 to 0.03033, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0624 - val_loss: 0.0303 - val_mse: 0.0303 - val_mae: 0.1402 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 279/500
Epoch 00279: val_loss improved from 0.03033 to 0.03008, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0620 - val_loss: 0.0301 - val_mse: 0.0301 - val_mae: 0.1396 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 280/500
Epoch 00280: val_loss improved from 0.03008 to 0.03007, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0610 - val_loss: 0.0301 - val_mse: 0.0301 - val_mae: 0.1396 - lr: 1.0000e-05 - 106ms/epoch - 7ms/step
Epoch 281/500
Epoch 00281: val_loss did not improve from 0.03007
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0605 - val_loss: 0.0302 - val_mse: 0.0302 - val_mae: 0.1399 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 282/500
Epoch 00282: val_loss did not improve from 0.03007
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0647 - val_loss: 0.0302 - val_mse: 0.0302 - val_mae: 0.1399 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 283/500
Epoch 00283: val_loss improved from 0.03007 to 0.02989, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0619 - val_loss: 0.0299 - val_mse: 0.0299 - val_mae: 0.1392 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 284/500
Epoch 00284: val_loss improved from 0.02989 to 0.02979, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0612 - val_loss: 0.0298 - val_mse: 0.0298 - val_mae: 0.1389 - lr: 1.0000e-05 - 130ms/epoch - 8ms/step
Epoch 285/500
Epoch 00285: val_loss improved from 0.02979 to 0.02966, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0600 - val_loss: 0.0297 - val_mse: 0.0297 - val_mae: 0.1386 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 286/500
Epoch 00286: val_loss improved from 0.02966 to 0.02951, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0595 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1382 - lr: 1.0000e-05 - 108ms/epoch - 7ms/step
Epoch 287/500
Epoch 00287: val_loss improved from 0.02951 to 0.02924, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0625 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1374 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 288/500
Epoch 00288: val_loss improved from 0.02924 to 0.02917, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0626 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1372 - lr: 1.0000e-05 - 102ms/epoch - 6ms/step
Epoch 289/500
Epoch 00289: val_loss improved from 0.02917 to 0.02916, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0599 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1372 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 290/500
Epoch 00290: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0608 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1373 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 291/500
Epoch 00291: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0585 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1373 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 292/500
Epoch 00292: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0593 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1373 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 293/500
Epoch 00293: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0590 - val_loss: 0.0294 - val_mse: 0.0294 - val_mae: 0.1380 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 294/500
Epoch 00294: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0608 - val_loss: 0.0294 - val_mse: 0.0294 - val_mae: 0.1381 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 295/500
Epoch 00295: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0584 - val_loss: 0.0294 - val_mse: 0.0294 - val_mae: 0.1382 - lr: 1.0000e-05 - 77ms/epoch - 5ms/step
Epoch 296/500
Epoch 00296: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0605 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1386 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 297/500
Epoch 00297: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0609 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1383 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 298/500
Epoch 00298: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0590 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1386 - lr: 1.0000e-05 - 88ms/epoch - 6ms/step
Epoch 299/500
Epoch 00299: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0603 - val_loss: 0.0297 - val_mse: 0.0297 - val_mae: 0.1390 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 300/500
Epoch 00300: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0637 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1388 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 301/500
Epoch 00301: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0591 - val_loss: 0.0293 - val_mse: 0.0293 - val_mae: 0.1382 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 302/500
Epoch 00302: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0611 - val_loss: 0.0294 - val_mse: 0.0294 - val_mae: 0.1384 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 303/500
Epoch 00303: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0613 - val_loss: 0.0296 - val_mse: 0.0296 - val_mae: 0.1388 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 304/500
Epoch 00304: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0613 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1386 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 305/500
Epoch 00305: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0619 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1388 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 306/500
Epoch 00306: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0601 - val_loss: 0.0295 - val_mse: 0.0295 - val_mae: 0.1388 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 307/500
Epoch 00307: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0600 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1380 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 308/500
Epoch 00308: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0599 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1379 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 309/500
Epoch 00309: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0599 - val_loss: 0.0292 - val_mse: 0.0292 - val_mae: 0.1378 - lr: 1.0000e-05 - 73ms/epoch - 5ms/step
Epoch 310/500
Epoch 00310: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0589 - val_loss: 0.0293 - val_mse: 0.0293 - val_mae: 0.1381 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 311/500
Epoch 00311: val_loss did not improve from 0.02916
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0602 - val_loss: 0.0293 - val_mse: 0.0293 - val_mae: 0.1381 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 312/500
Epoch 00312: val_loss improved from 0.02916 to 0.02910, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0587 - val_loss: 0.0291 - val_mse: 0.0291 - val_mae: 0.1375 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 313/500
Epoch 00313: val_loss improved from 0.02910 to 0.02891, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0607 - val_loss: 0.0289 - val_mse: 0.0289 - val_mae: 0.1370 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 314/500
Epoch 00314: val_loss improved from 0.02891 to 0.02882, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0593 - val_loss: 0.0288 - val_mse: 0.0288 - val_mae: 0.1368 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 315/500
Epoch 00315: val_loss improved from 0.02882 to 0.02868, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0642 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1364 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 316/500
Epoch 00316: val_loss improved from 0.02868 to 0.02857, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0588 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1361 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 317/500
Epoch 00317: val_loss did not improve from 0.02857
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0602 - val_loss: 0.0287 - val_mse: 0.0287 - val_mae: 0.1364 - lr: 1.0000e-05 - 88ms/epoch - 5ms/step
Epoch 318/500
Epoch 00318: val_loss did not improve from 0.02857
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0602 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1362 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 319/500
Epoch 00319: val_loss did not improve from 0.02857
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0586 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1363 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 320/500
Epoch 00320: val_loss did not improve from 0.02857
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0608 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1363 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 321/500
Epoch 00321: val_loss improved from 0.02857 to 0.02848, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0619 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1361 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 322/500
Epoch 00322: val_loss did not improve from 0.02848
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0580 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1363 - lr: 1.0000e-05 - 91ms/epoch - 6ms/step
Epoch 323/500
Epoch 00323: val_loss did not improve from 0.02848
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0595 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1364 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 324/500
Epoch 00324: val_loss did not improve from 0.02848
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0570 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1363 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 325/500
Epoch 00325: val_loss improved from 0.02848 to 0.02838, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0605 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1358 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 326/500
Epoch 00326: val_loss improved from 0.02838 to 0.02836, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0586 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1358 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 327/500
Epoch 00327: val_loss did not improve from 0.02836
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0581 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1362 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 328/500
Epoch 00328: val_loss did not improve from 0.02836
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0570 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1362 - lr: 1.0000e-05 - 78ms/epoch - 5ms/step
Epoch 329/500
Epoch 00329: val_loss improved from 0.02836 to 0.02826, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0597 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1356 - lr: 1.0000e-05 - 107ms/epoch - 7ms/step
Epoch 330/500
Epoch 00330: val_loss did not improve from 0.02826
16/16 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0611 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1358 - lr: 1.0000e-05 - 87ms/epoch - 5ms/step
Epoch 331/500
Epoch 00331: val_loss did not improve from 0.02826
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0603 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1364 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 332/500
Epoch 00332: val_loss did not improve from 0.02826
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0597 - val_loss: 0.0286 - val_mse: 0.0286 - val_mae: 0.1367 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 333/500
Epoch 00333: val_loss did not improve from 0.02826
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0599 - val_loss: 0.0285 - val_mse: 0.0285 - val_mae: 0.1364 - lr: 1.0000e-05 - 104ms/epoch - 7ms/step
Epoch 334/500
Epoch 00334: val_loss did not improve from 0.02826
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1360 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 335/500
Epoch 00335: val_loss did not improve from 0.02826
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.0283 - val_mse: 0.0283 - val_mae: 0.1358 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 336/500
Epoch 00336: val_loss improved from 0.02826 to 0.02824, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0569 - val_loss: 0.0282 - val_mse: 0.0282 - val_mae: 0.1358 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 337/500
Epoch 00337: val_loss improved from 0.02824 to 0.02795, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0595 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1350 - lr: 1.0000e-05 - 112ms/epoch - 7ms/step
Epoch 338/500
Epoch 00338: val_loss improved from 0.02795 to 0.02794, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0591 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1350 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 339/500
Epoch 00339: val_loss did not improve from 0.02794
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1352 - lr: 1.0000e-05 - 95ms/epoch - 6ms/step
Epoch 340/500
Epoch 00340: val_loss did not improve from 0.02794
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1356 - lr: 1.0000e-05 - 84ms/epoch - 5ms/step
Epoch 341/500
Epoch 00341: val_loss did not improve from 0.02794
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0570 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1355 - lr: 1.0000e-05 - 89ms/epoch - 6ms/step
Epoch 342/500
Epoch 00342: val_loss did not improve from 0.02794
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0583 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1355 - lr: 1.0000e-05 - 109ms/epoch - 7ms/step
Epoch 343/500
Epoch 00343: val_loss did not improve from 0.02794
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0579 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1356 - lr: 1.0000e-05 - 81ms/epoch - 5ms/step
Epoch 344/500
Epoch 00344: val_loss did not improve from 0.02794
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0598 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1352 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 345/500
Epoch 00345: val_loss improved from 0.02794 to 0.02793, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0568 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1351 - lr: 1.0000e-05 - 116ms/epoch - 7ms/step
Epoch 346/500
Epoch 00346: val_loss did not improve from 0.02793
16/16 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0594 - val_loss: 0.0280 - val_mse: 0.0280 - val_mae: 0.1353 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 347/500
Epoch 00347: val_loss did not improve from 0.02793
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0568 - val_loss: 0.0281 - val_mse: 0.0281 - val_mae: 0.1356 - lr: 1.0000e-05 - 80ms/epoch - 5ms/step
Epoch 348/500
Epoch 00348: val_loss improved from 0.02793 to 0.02792, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0597 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1352 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 349/500
Epoch 00349: val_loss improved from 0.02792 to 0.02786, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0577 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1350 - lr: 1.0000e-05 - 118ms/epoch - 7ms/step
Epoch 350/500
Epoch 00350: val_loss improved from 0.02786 to 0.02780, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0567 - val_loss: 0.0278 - val_mse: 0.0278 - val_mae: 0.1348 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 351/500
Epoch 00351: val_loss improved from 0.02780 to 0.02757, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0049 - mse: 0.0049 - mae: 0.0554 - val_loss: 0.0276 - val_mse: 0.0276 - val_mae: 0.1341 - lr: 1.0000e-05 - 110ms/epoch - 7ms/step
Epoch 352/500
Epoch 00352: val_loss improved from 0.02757 to 0.02746, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0595 - val_loss: 0.0275 - val_mse: 0.0275 - val_mae: 0.1338 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 353/500
Epoch 00353: val_loss improved from 0.02746 to 0.02734, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0575 - val_loss: 0.0273 - val_mse: 0.0273 - val_mae: 0.1335 - lr: 1.0000e-05 - 100ms/epoch - 6ms/step
Epoch 354/500
Epoch 00354: val_loss improved from 0.02734 to 0.02710, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0575 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1328 - lr: 1.0000e-05 - 99ms/epoch - 6ms/step
Epoch 355/500
Epoch 00355: val_loss improved from 0.02710 to 0.02709, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0576 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1329 - lr: 1.0000e-05 - 96ms/epoch - 6ms/step
Epoch 356/500
Epoch 00356: val_loss did not improve from 0.02709
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0602 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1330 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 357/500
Epoch 00357: val_loss did not improve from 0.02709
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0589 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1333 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 358/500
Epoch 00358: val_loss did not improve from 0.02709
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0576 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1334 - lr: 1.0000e-05 - 79ms/epoch - 5ms/step
Epoch 359/500
Epoch 00359: val_loss did not improve from 0.02709
16/16 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0601 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1332 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 360/500
Epoch 00360: val_loss improved from 0.02709 to 0.02703, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0599 - val_loss: 0.0270 - val_mse: 0.0270 - val_mae: 0.1328 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 361/500
Epoch 00361: val_loss improved from 0.02703 to 0.02695, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0573 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1326 - lr: 1.0000e-05 - 115ms/epoch - 7ms/step
Epoch 362/500
Epoch 00362: val_loss improved from 0.02695 to 0.02691, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0564 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1325 - lr: 1.0000e-05 - 97ms/epoch - 6ms/step
Epoch 363/500
Epoch 00363: val_loss did not improve from 0.02691
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0579 - val_loss: 0.0270 - val_mse: 0.0270 - val_mae: 0.1327 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 364/500
Epoch 00364: val_loss did not improve from 0.02691
16/16 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0582 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1330 - lr: 1.0000e-05 - 92ms/epoch - 6ms/step
Epoch 365/500
Epoch 00365: val_loss did not improve from 0.02691
16/16 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0597 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1334 - lr: 1.0000e-05 - 86ms/epoch - 5ms/step
Epoch 366/500
Epoch 00366: val_loss did not improve from 0.02691
16/16 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0550 - val_loss: 0.0272 - val_mse: 0.0272 - val_mae: 0.1336 - lr: 1.0000e-05 - 93ms/epoch - 6ms/step
Epoch 367/500
Epoch 00367: val_loss did not improve from 0.02691
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0572 - val_loss: 0.0271 - val_mse: 0.0271 - val_mae: 0.1333 - lr: 1.0000e-05 - 82ms/epoch - 5ms/step
Epoch 368/500
Epoch 00368: val_loss did not improve from 0.02691
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0591 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1327 - lr: 1.0000e-05 - 90ms/epoch - 6ms/step
Epoch 369/500
Epoch 00369: val_loss did not improve from 0.02691
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0586 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1327 - lr: 1.0000e-05 - 83ms/epoch - 5ms/step
Epoch 370/500
Epoch 00370: val_loss improved from 0.02691 to 0.02685, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0582 - val_loss: 0.0269 - val_mse: 0.0269 - val_mae: 0.1325 - lr: 1.0000e-05 - 94ms/epoch - 6ms/step
Epoch 371/500
Epoch 00371: val_loss improved from 0.02685 to 0.02660, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0568 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1317 - lr: 1.0000e-05 - 121ms/epoch - 8ms/step
Epoch 372/500
Epoch 00372: val_loss improved from 0.02660 to 0.02651, saving model to LSTM7.h5
16/16 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0605 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1315 - lr: 1.0000e-05 - 98ms/epoch - 6ms/step
Epoch 373/500
Epoch 00373: val_loss improved from 0.02651 to 0.02637, saving model to LSTM7.h5
[training log condensed: epochs 373-500 continue at lr = 1.0000e-05 with 16 batches/epoch; val_loss improves intermittently from 0.0264 to a best of 0.02183 at epoch 500, each improvement saving the model to LSTM7.h5]
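The "improved / did not improve ... saving model to LSTM7.h5" lines in the log come from a best-so-far checkpoint rule, as in Keras's `ModelCheckpoint` with `save_best_only=True`. A minimal sketch of that bookkeeping, independent of Keras (the function name is illustrative, not part of any library):

```python
import math

def checkpoint_messages(val_losses, path="LSTM7.h5"):
    """Reproduce ModelCheckpoint-style 'improved / did not improve' logging."""
    best = math.inf
    messages = []
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            # A strictly lower val_loss beats the incumbent and triggers a save.
            messages.append(
                f"Epoch {epoch:05d}: val_loss improved from {best:.5f} "
                f"to {loss:.5f}, saving model to {path}"
            )
            best = loss
        else:
            messages.append(
                f"Epoch {epoch:05d}: val_loss did not improve from {best:.5f}"
            )
    return messages
```

On the first epoch the incumbent best is infinity, which matches the "improved from inf" line at the start of each training run.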
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 44.65212926265077
RMSE: 6.682224873696692
MAPE: 5.204686480071648
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 45.539825469272486
RMSE: 6.748320196113436
MAPE: 5.43245952292463
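The SMA and EMA blocks above report directional accuracy alongside MSE, RMSE, and MAPE. A hedged sketch of how such figures are typically computed (the exact definitions behind "Prediction vs Close" and "Prediction vs Prediction" are assumptions; here directional accuracy compares the sign of the predicted move against the sign of the realised move):

```python
import numpy as np

def error_metrics(actual, predicted):
    """MSE, RMSE, and MAPE (in percent) between two aligned price series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mse = np.mean((actual - predicted) ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Percent of steps where the predicted move has the same sign as the actual move."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return np.mean(np.sign(np.diff(a)) == np.sign(np.diff(p))) * 100
```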
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
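TA-Lib's WMA weights the most recent observations most heavily, with linearly increasing weights over the lookback window. A NumPy equivalent (a sketch of the same weighting scheme, not TA-Lib itself):

```python
import numpy as np

def wma(price, timeperiod=30):
    """Weighted moving average with linear weights 1..timeperiod (newest heaviest).

    Returns an array the same length as the input, NaN until the first full
    window, matching TA-Lib's WMA convention.
    """
    price = np.asarray(price, dtype=float)
    weights = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        window = price[i - timeperiod + 1 : i + 1]
        out[i] = np.dot(window, weights) / weights.sum()
    return out
```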
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.53 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.34 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14597.576, Time=5.56 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=8.44 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15338.693, Time=10.98 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15153.472, Time=25.66 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17112.658, Time=15.33 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.587, Time=10.00 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15106.216, Time=14.12 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-12251.715, Time=35.41 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 134.382 seconds
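pmdarima's `auto_arima` automates the stepwise search above, but the selection rule itself just keeps the candidate (p, d, q) order with the lowest AIC. Using the AIC values reported by the search:

```python
# Candidate orders and their AICs, as reported by the stepwise search above.
candidates = {
    (1, 3, 1): -16989.778,
    (0, 3, 0): -14558.587,
    (1, 3, 0): -14597.576,
    (0, 3, 1): -14556.587,
    (2, 3, 1): -15338.693,
    (1, 3, 2): -15153.472,
    (0, 3, 2): -17112.658,
    (0, 3, 3): -14552.587,
    (1, 3, 3): -15106.216,
}
best_order = min(candidates, key=candidates.get)  # lowest AIC wins
print(best_order)  # (0, 3, 2), matching "Best model: ARIMA(0,3,2)(0,0,0)[0]"
```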
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8588.329
Date: Sun, 12 Dec 2021 AIC -17112.658
Time: 18:30:04 BIC -16962.551
Sample: 0 HQIC -17055.011
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.53e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x2 -4.512e-09 3.25e-06 -0.001 0.999 -6.38e-06 6.37e-06
x3 -4.538e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x4 1.0000 3.26e-06 3.07e+05 0.000 1.000 1.000
x5 -4.105e-09 3.11e-06 -0.001 0.999 -6.1e-06 6.09e-06
x6 -1.488e-08 5.45e-06 -0.003 0.998 -1.07e-05 1.07e-05
x7 -4.481e-09 3.24e-06 -0.001 0.999 -6.36e-06 6.36e-06
x8 -4.365e-09 3.2e-06 -0.001 0.999 -6.29e-06 6.28e-06
x9 -4.628e-10 8.38e-07 -0.001 1.000 -1.64e-06 1.64e-06
x10 -7.326e-10 1.3e-06 -0.001 1.000 -2.55e-06 2.54e-06
x11 -4.347e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x12 -4.345e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x13 -4.52e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x14 -3.586e-08 9e-06 -0.004 0.997 -1.77e-05 1.76e-05
x15 -3.757e-09 2.98e-06 -0.001 0.999 -5.84e-06 5.83e-06
x16 -1.24e-08 5.36e-06 -0.002 0.998 -1.05e-05 1.05e-05
x17 -4.515e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x18 -2.632e-10 7.07e-07 -0.000 1.000 -1.39e-06 1.39e-06
x19 -4.642e-09 3.3e-06 -0.001 0.999 -6.47e-06 6.46e-06
x20 -3.919e-10 6.91e-07 -0.001 1.000 -1.36e-06 1.35e-06
x21 -7.69e-09 4.13e-06 -0.002 0.999 -8.11e-06 8.09e-06
x22 -6.998e-12 2.69e-13 -25.970 0.000 -7.53e-12 -6.47e-12
x23 -1.81e-10 2.22e-12 -81.582 0.000 -1.85e-10 -1.77e-10
x24 -4.955e-08 8.9e-06 -0.006 0.996 -1.75e-05 1.74e-05
x25 -4.901e-08 8.4e-06 -0.006 0.995 -1.65e-05 1.64e-05
x26 -6.446e-08 1.2e-05 -0.005 0.996 -2.37e-05 2.35e-05
x27 -5.73e-08 1.14e-05 -0.005 0.996 -2.24e-05 2.23e-05
x28 -2.997e-08 8.22e-06 -0.004 0.997 -1.61e-05 1.61e-05
x29 -3.486e-08 8.89e-06 -0.004 0.997 -1.75e-05 1.74e-05
ma.L1 -1.3902 3.62e-10 -3.84e+09 0.000 -1.390 -1.390
ma.L2 0.4033 3.72e-10 1.08e+09 0.000 0.403 0.403
sigma2 8.541e-11 6.95e-11 1.229 0.219 -5.08e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 66.92 Jarque-Bera (JB): 6039240.46
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.14
Prob(H) (two-sided): 0.00 Kurtosis: 426.63
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.94e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
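The selected order has d = 3, meaning the series is differenced three times before the MA(2) terms apply. `np.diff` with `n=3` illustrates what third-order differencing does: a deterministic cubic trend is reduced to a constant.

```python
import numpy as np

t = np.arange(10, dtype=float)
cubic = t ** 3                    # deterministic cubic trend
third_diff = np.diff(cubic, n=3)  # third-order differencing, as in d=3
print(third_diff)                 # constant array of 6.0: the cubic trend is removed
```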
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.30534, saving model to LSTM7.h5
17/17 - 2s - loss: 0.1565 - mse: 0.1565 - mae: 0.3274 - val_loss: 0.3053 - val_mse: 0.3053 - val_mae: 0.5221 - lr: 0.0010 - 2s/epoch - 125ms/step
[training log condensed: epochs 2-14 continue at lr = 0.0010 with 17 batches/epoch; val_loss improves from 0.30534 to 0.09558 by epoch 14, each improvement saving the model to LSTM7.h5]
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.09558
17/17 - 0s - loss: 0.0083 - mse: 0.0083 - mae: 0.0710 - val_loss: 0.1009 - val_mse: 0.1009 - val_mae: 0.2964 - lr: 0.0010 - 83ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.09558 to 0.08888, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0706 - val_loss: 0.0889 - val_mse: 0.0889 - val_mae: 0.2764 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss improved from 0.08888 to 0.08816, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0681 - val_loss: 0.0882 - val_mse: 0.0882 - val_mae: 0.2755 - lr: 0.0010 - 110ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.08816
17/17 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0655 - val_loss: 0.0902 - val_mse: 0.0902 - val_mae: 0.2802 - lr: 0.0010 - 77ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss improved from 0.08816 to 0.07791, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0647 - val_loss: 0.0779 - val_mse: 0.0779 - val_mae: 0.2589 - lr: 0.0010 - 118ms/epoch - 7ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.07791
17/17 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0663 - val_loss: 0.0814 - val_mse: 0.0814 - val_mae: 0.2659 - lr: 0.0010 - 83ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.07791
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0607 - val_loss: 0.0787 - val_mse: 0.0787 - val_mae: 0.2605 - lr: 0.0010 - 81ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.07791
17/17 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0586 - val_loss: 0.0842 - val_mse: 0.0842 - val_mae: 0.2709 - lr: 0.0010 - 90ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss improved from 0.07791 to 0.07391, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0603 - val_loss: 0.0739 - val_mse: 0.0739 - val_mae: 0.2531 - lr: 0.0010 - 104ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.07391
17/17 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0637 - val_loss: 0.0862 - val_mse: 0.0862 - val_mae: 0.2757 - lr: 0.0010 - 100ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.07391
17/17 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0755 - val_mse: 0.0755 - val_mae: 0.2557 - lr: 0.0010 - 99ms/epoch - 6ms/step
Epoch 26/500
Epoch 00026: val_loss improved from 0.07391 to 0.06662, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0046 - mse: 0.0046 - mae: 0.0541 - val_loss: 0.0666 - val_mse: 0.0666 - val_mae: 0.2391 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.06662
17/17 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0557 - val_loss: 0.0728 - val_mse: 0.0728 - val_mae: 0.2523 - lr: 0.0010 - 96ms/epoch - 6ms/step
Epoch 28/500
Epoch 00028: val_loss improved from 0.06662 to 0.06080, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0578 - val_loss: 0.0608 - val_mse: 0.0608 - val_mae: 0.2289 - lr: 0.0010 - 102ms/epoch - 6ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.06080
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0564 - val_loss: 0.0626 - val_mse: 0.0626 - val_mae: 0.2327 - lr: 0.0010 - 80ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.06080
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0519 - val_loss: 0.0635 - val_mse: 0.0635 - val_mae: 0.2345 - lr: 0.0010 - 86ms/epoch - 5ms/step
Epoch 31/500
Epoch 00031: val_loss improved from 0.06080 to 0.04957, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0044 - mse: 0.0044 - mae: 0.0525 - val_loss: 0.0496 - val_mse: 0.0496 - val_mae: 0.2046 - lr: 0.0010 - 106ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.04957
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0573 - val_loss: 0.0550 - val_mse: 0.0550 - val_mae: 0.2182 - lr: 0.0010 - 88ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: val_loss improved from 0.04957 to 0.04416, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0575 - val_loss: 0.0442 - val_mse: 0.0442 - val_mae: 0.1929 - lr: 0.0010 - 119ms/epoch - 7ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.04416
17/17 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0596 - val_loss: 0.0517 - val_mse: 0.0517 - val_mae: 0.2105 - lr: 0.0010 - 106ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.04416
17/17 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0511 - val_loss: 0.0574 - val_mse: 0.0574 - val_mae: 0.2232 - lr: 0.0010 - 84ms/epoch - 5ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.04416
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0564 - val_loss: 0.0517 - val_mse: 0.0517 - val_mae: 0.2116 - lr: 0.0010 - 85ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.04416
17/17 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0576 - val_loss: 0.0497 - val_mse: 0.0497 - val_mae: 0.2078 - lr: 0.0010 - 89ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss improved from 0.04416 to 0.03520, saving model to LSTM7.h5
17/17 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0595 - val_loss: 0.0352 - val_mse: 0.0352 - val_mae: 0.1705 - lr: 0.0010 - 114ms/epoch - 7ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.03520
17/17 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0565 - val_loss: 0.0544 - val_mse: 0.0544 - val_mae: 0.2172 - lr: 0.0010 - 88ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.03520
17/17 - 0s - loss: 0.0048 - mse: 0.0048 - mae: 0.0551 - val_loss: 0.0683 - val_mse: 0.0683 - val_mae: 0.2465 - lr: 0.0010 - 77ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.03520
17/17 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0637 - val_loss: 0.0464 - val_mse: 0.0464 - val_mae: 0.2005 - lr: 0.0010 - 84ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.03520
17/17 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0608 - val_loss: 0.0416 - val_mse: 0.0416 - val_mae: 0.1889 - lr: 0.0010 - 88ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00043: val_loss did not improve from 0.03520
17/17 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0602 - val_loss: 0.0483 - val_mse: 0.0483 - val_mae: 0.2043 - lr: 0.0010 - 95ms/epoch - 6ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.03520
17/17 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0501 - val_loss: 0.0502 - val_mse: 0.0502 - val_mae: 0.2088 - lr: 1.0000e-04 - 87ms/epoch - 5ms/step
[Epochs 45-87 omitted: val_loss plateaued near 0.050 and never improved from 0.03520; ReduceLROnPlateau cut the learning rate to 1e-05 at epoch 48 and it stayed at that floor thereafter]
Epoch 88/500
Epoch 00088: val_loss did not improve from 0.03520
17/17 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0452 - val_loss: 0.0505 - val_mse: 0.0505 - val_mae: 0.2100 - lr: 1.0000e-05 - 85ms/epoch - 5ms/step
Epoch 00088: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 44.65212926265077
RMSE: 6.682224873696692
MAPE: 5.204686480071648
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 45.539825469272486
RMSE: 6.748320196113436
MAPE: 5.43245952292463
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 42.30488040231578
RMSE: 6.504220199402522
MAPE: 5.010195929360332
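The MSE, RMSE, and MAPE figures above follow their standard definitions. A self-contained sketch of how they, plus a directional-accuracy score, might be computed — the `actual`/`predicted` arrays are made up, and the "Prediction vs Close" interpretation (matching day-over-day direction of movement) is an assumption:

```python
import numpy as np

def evaluate(actual, predicted):
    errors = actual - predicted
    mse = np.mean(errors ** 2)                       # mean squared error
    rmse = np.sqrt(mse)                              # root mean squared error
    mape = np.mean(np.abs(errors / actual)) * 100    # mean abs. percentage error
    # Directional accuracy: how often the predicted day-over-day direction
    # matches the actual direction (assumed interpretation of the scores above).
    direction = np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted))) * 100
    return mse, rmse, mape, direction

actual = np.array([100.0, 102.0, 101.0, 105.0, 107.0])
predicted = np.array([101.0, 101.5, 102.0, 104.0, 108.0])
mse, rmse, mape, direction = evaluate(actual, predicted)
print(f"MSE: {mse:.4f}  RMSE: {rmse:.4f}  MAPE: {mape:.4f}%  Direction: {direction:.1f}%")
```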
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
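The TA-Lib help text above lists DEMA's inputs and parameters. The indicator is conventionally defined as DEMA = 2·EMA(price) − EMA(EMA(price)); a pure-pandas sketch of that definition follows (TA-Lib's own warm-up handling may differ slightly, so treat this as an approximation, not a drop-in replacement):

```python
import pandas as pd

def dema(price: pd.Series, timeperiod: int = 30) -> pd.Series:
    # DEMA = 2 * EMA - EMA(EMA): the double smoothing cancels much of the
    # lag a plain EMA introduces.
    ema1 = price.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2

# Toy price series for illustration only.
prices = pd.Series([10.0, 10.5, 11.0, 10.8, 11.2, 11.5, 11.3, 11.9])
print(dema(prices, timeperiod=3).round(4))
```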
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.776, Time=3.29 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.586, Time=5.34 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16271.755, Time=7.16 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.586, Time=8.20 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15152.908, Time=10.90 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14481.105, Time=12.97 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16088.109, Time=20.81 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.021, Time=6.25 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.615, Time=3.78 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=7.55 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=18.60 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.981, Time=4.57 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.666, Time=4.55 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 114.014 seconds
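The stepwise search ranks candidates by AIC = 2k − 2·ln(L̂). The winning ARIMA(3,3,1) fit reports Log Likelihood 8569.727 with, by my count, k = 34 estimated parameters (29 exogenous coefficients, 3 AR terms, 1 MA term, and sigma2), which reproduces the reported AIC; both the parameter count and the effective sample size used for BIC (808 observations minus d = 3) are my inferences, not values stated by the output:

```python
import math

loglik = 8569.727   # Log Likelihood from the SARIMAX summary
k = 34              # inferred: 29 exog + 3 AR + 1 MA + sigma2
n_eff = 805         # inferred effective sample size: 808 obs minus d = 3

aic = 2 * k - 2 * loglik             # AIC = 2k - 2 ln(L)
bic = k * math.log(n_eff) - 2 * loglik  # BIC = k ln(n) - 2 ln(L)

print(round(aic, 3))  # -17071.454, matching the summary
print(round(bic, 3))
```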
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 18:36:11 BIC -16911.965
Sample: 0 HQIC -17010.203
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 6.02e-05 -4.65e-06 1.000 -0.000 0.000
x2 -2.817e-10 6.04e-05 -4.66e-06 1.000 -0.000 0.000
x3 -2.805e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x4 1.0000 6.03e-05 1.66e+04 0.000 1.000 1.000
x5 -2.6e-10 5.8e-05 -4.48e-06 1.000 -0.000 0.000
x6 -1.389e-09 0.000 -1.08e-05 1.000 -0.000 0.000
x7 -2.789e-10 6.01e-05 -4.64e-06 1.000 -0.000 0.000
x8 -2.763e-10 5.99e-05 -4.62e-06 1.000 -0.000 0.000
x9 -2.224e-12 1.6e-06 -1.39e-06 1.000 -3.13e-06 3.13e-06
x10 -1.345e-10 4.12e-05 -3.26e-06 1.000 -8.08e-05 8.08e-05
x11 -2.9e-10 6.12e-05 -4.74e-06 1.000 -0.000 0.000
x12 -2.602e-10 5.82e-05 -4.47e-06 1.000 -0.000 0.000
x13 -2.807e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x14 -1.87e-09 0.000 -1.2e-05 1.000 -0.000 0.000
x15 -2.844e-10 6.05e-05 -4.7e-06 1.000 -0.000 0.000
x16 -7.962e-11 3.2e-05 -2.48e-06 1.000 -6.28e-05 6.28e-05
x17 -2.445e-10 5.61e-05 -4.36e-06 1.000 -0.000 0.000
x18 -6.4e-10 9.15e-05 -6.99e-06 1.000 -0.000 0.000
x19 -2.923e-10 6.14e-05 -4.76e-06 1.000 -0.000 0.000
x20 -4.336e-10 7.41e-05 -5.86e-06 1.000 -0.000 0.000
x21 -4.55e-10 7.5e-05 -6.07e-06 1.000 -0.000 0.000
x22 -3.587e-13 1.42e-11 -0.025 0.980 -2.82e-11 2.75e-11
x23 -1.088e-11 9.56e-11 -0.114 0.909 -1.98e-10 1.76e-10
x24 -2.146e-09 0.000 -1.63e-05 1.000 -0.000 0.000
x25 -1.637e-09 0.000 -1.35e-05 1.000 -0.000 0.000
x26 -3.147e-09 0.000 -1.56e-05 1.000 -0.000 0.000
x27 -2.58e-09 0.000 -1.41e-05 1.000 -0.000 0.000
x28 -2.444e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x29 -1.666e-09 0.000 -1.13e-05 1.000 -0.000 0.000
ar.L1 -0.4923 5.1e-10 -9.65e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 2.96e-10 -6.49e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.4e-10 -3.29e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.16e-09 -6.12e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.06 Jarque-Bera (JB): 4126495.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.48
Prob(H) (two-sided): 0.00 Kurtosis: 353.58
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.01e+30. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.85386, saving model to LSTM7.h5
10/10 - 2s - loss: 0.7346 - mse: 0.7346 - mae: 0.7356 - val_loss: 0.8539 - val_mse: 0.8539 - val_mae: 0.9053 - lr: 0.0010 - 2s/epoch - 218ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.85386 to 0.57893, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0770 - mse: 0.0770 - mae: 0.2318 - val_loss: 0.5789 - val_mse: 0.5789 - val_mae: 0.7425 - lr: 0.0010 - 75ms/epoch - 8ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.57893 to 0.40598, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0758 - mse: 0.0758 - mae: 0.2373 - val_loss: 0.4060 - val_mse: 0.4060 - val_mae: 0.6181 - lr: 0.0010 - 92ms/epoch - 9ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.40598 to 0.29908, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0418 - mse: 0.0418 - mae: 0.1718 - val_loss: 0.2991 - val_mse: 0.2991 - val_mae: 0.5267 - lr: 0.0010 - 95ms/epoch - 9ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.29908 to 0.23737, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0252 - mse: 0.0252 - mae: 0.1260 - val_loss: 0.2374 - val_mse: 0.2374 - val_mae: 0.4658 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.23737 to 0.20982, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0257 - mse: 0.0257 - mae: 0.1275 - val_loss: 0.2098 - val_mse: 0.2098 - val_mae: 0.4361 - lr: 0.0010 - 117ms/epoch - 12ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.20982 to 0.18859, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0218 - mse: 0.0218 - mae: 0.1176 - val_loss: 0.1886 - val_mse: 0.1886 - val_mae: 0.4119 - lr: 0.0010 - 92ms/epoch - 9ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.18859 to 0.17341, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0192 - mse: 0.0192 - mae: 0.1121 - val_loss: 0.1734 - val_mse: 0.1734 - val_mae: 0.3936 - lr: 0.0010 - 101ms/epoch - 10ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.17341 to 0.16995, saving model to LSTM7.h5
10/10 - 0s - loss: 0.0199 - mse: 0.0199 - mae: 0.1131 - val_loss: 0.1699 - val_mse: 0.1699 - val_mae: 0.3894 - lr: 0.0010 - 97ms/epoch - 10ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0174 - mse: 0.0174 - mae: 0.1035 - val_loss: 0.1730 - val_mse: 0.1730 - val_mae: 0.3934 - lr: 0.0010 - 60ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.1036 - val_loss: 0.1854 - val_mse: 0.1854 - val_mae: 0.4085 - lr: 0.0010 - 68ms/epoch - 7ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0156 - mse: 0.0156 - mae: 0.0984 - val_loss: 0.1808 - val_mse: 0.1808 - val_mae: 0.4030 - lr: 0.0010 - 73ms/epoch - 7ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0136 - mse: 0.0136 - mae: 0.0937 - val_loss: 0.1780 - val_mse: 0.1780 - val_mae: 0.3994 - lr: 0.0010 - 72ms/epoch - 7ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00014: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0146 - mse: 0.0146 - mae: 0.0952 - val_loss: 0.1813 - val_mse: 0.1813 - val_mae: 0.4035 - lr: 0.0010 - 67ms/epoch - 7ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0942 - val_loss: 0.1807 - val_mse: 0.1807 - val_mae: 0.4028 - lr: 1.0000e-04 - 77ms/epoch - 8ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0143 - mse: 0.0143 - mae: 0.0947 - val_loss: 0.1810 - val_mse: 0.1810 - val_mae: 0.4031 - lr: 1.0000e-04 - 76ms/epoch - 8ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0129 - mse: 0.0129 - mae: 0.0924 - val_loss: 0.1803 - val_mse: 0.1803 - val_mae: 0.4024 - lr: 1.0000e-04 - 83ms/epoch - 8ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0895 - val_loss: 0.1790 - val_mse: 0.1790 - val_mae: 0.4007 - lr: 1.0000e-04 - 86ms/epoch - 9ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00019: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0911 - val_loss: 0.1779 - val_mse: 0.1779 - val_mae: 0.3994 - lr: 1.0000e-04 - 63ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0133 - mse: 0.0133 - mae: 0.0925 - val_loss: 0.1777 - val_mse: 0.1777 - val_mae: 0.3991 - lr: 1.0000e-05 - 71ms/epoch - 7ms/step
[Epochs 21-50 omitted: val_loss plateaued near 0.174-0.178 at the 1e-05 learning-rate floor and never improved from 0.16995]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0128 - mse: 0.0128 - mae: 0.0892 - val_loss: 0.1739 - val_mse: 0.1739 - val_mae: 0.3944 - lr: 1.0000e-05 - 68ms/epoch - 7ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0130 - mse: 0.0130 - mae: 0.0903 - val_loss: 0.1734 - val_mse: 0.1734 - val_mae: 0.3939 - lr: 1.0000e-05 - 69ms/epoch - 7ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0127 - mse: 0.0127 - mae: 0.0905 - val_loss: 0.1731 - val_mse: 0.1731 - val_mae: 0.3934 - lr: 1.0000e-05 - 73ms/epoch - 7ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0127 - mse: 0.0127 - mae: 0.0888 - val_loss: 0.1727 - val_mse: 0.1727 - val_mae: 0.3930 - lr: 1.0000e-05 - 85ms/epoch - 9ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0902 - val_loss: 0.1728 - val_mse: 0.1728 - val_mae: 0.3930 - lr: 1.0000e-05 - 70ms/epoch - 7ms/step
Epoch 56/500
Epoch 00056: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0137 - mse: 0.0137 - mae: 0.0907 - val_loss: 0.1727 - val_mse: 0.1727 - val_mae: 0.3930 - lr: 1.0000e-05 - 62ms/epoch - 6ms/step
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0131 - mse: 0.0131 - mae: 0.0901 - val_loss: 0.1725 - val_mse: 0.1725 - val_mae: 0.3927 - lr: 1.0000e-05 - 66ms/epoch - 7ms/step
Epoch 58/500
Epoch 00058: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0130 - mse: 0.0130 - mae: 0.0907 - val_loss: 0.1726 - val_mse: 0.1726 - val_mae: 0.3928 - lr: 1.0000e-05 - 61ms/epoch - 6ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.16995
10/10 - 0s - loss: 0.0132 - mse: 0.0132 - mae: 0.0906 - val_loss: 0.1724 - val_mse: 0.1724 - val_mae: 0.3926 - lr: 1.0000e-05 - 56ms/epoch - 6ms/step
Epoch 00059: early stopping
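The log above reflects Keras's `ReduceLROnPlateau` and `EarlyStopping` callbacks cutting the learning rate on plateaus and halting when `val_loss` stalls. The patience logic they apply can be sketched in plain Python (a simplified sketch of the idea, not the Keras implementation; parameter names are illustrative):

```python
# Simplified sketch of the patience logic behind ReduceLROnPlateau and
# EarlyStopping (illustrative only -- not the Keras API).

def run_schedule(val_losses, lr=1e-3, factor=0.1, lr_patience=5,
                 stop_patience=20, min_delta=0.0):
    """Return (stop_epoch, final_lr, best_loss) for a sequence of val losses."""
    best = float("inf")
    lr_wait = stop_wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best - min_delta:          # improvement: reset both counters
            best = loss
            lr_wait = stop_wait = 0
        else:
            lr_wait += 1
            stop_wait += 1
            if lr_wait >= lr_patience:       # short plateau: cut the learning rate
                lr *= factor
                lr_wait = 0
            if stop_wait >= stop_patience:   # long plateau: stop training early
                return epoch, lr, best
    return len(val_losses), lr, best
```

Feeding it a loss curve that improves for a few epochs and then flatlines reproduces the pattern seen in the log: two or three rate cuts, then an early stop once the stop patience is exhausted.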
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 44.65212926265077
RMSE: 6.682224873696692
MAPE: 5.204686480071648
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 45.539825469272486
RMSE: 6.748320196113436
MAPE: 5.43245952292463
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 42.30488040231578
RMSE: 6.504220199402522
MAPE: 5.010195929360332
DEMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 54.48% Accuracy
MSE: 23.305922116020078
RMSE: 4.827620751055335
MAPE: 3.7452201197397774
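The per-indicator summaries above report MSE, RMSE, MAPE, and two directional-accuracy scores. A sketch of how such figures can be computed with NumPy follows; the exact definitions of "Prediction vs Close" and "Prediction vs Prediction" are assumptions inferred from the labels, not taken from the notebook's code:

```python
import numpy as np

def evaluate(pred, close):
    """Error metrics plus two directional-accuracy scores (assumed definitions):
    'vs_close' -- does the predicted move from yesterday's close match the
                  actual move; 'vs_pred' -- does the predicted series itself
                  move in the same direction as price."""
    err = pred - close
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / close)) * 100)      # in percent
    real_dir = np.sign(np.diff(close))
    vs_close = float(np.mean(np.sign(pred[1:] - close[:-1]) == real_dir) * 100)
    vs_pred = float(np.mean(np.sign(np.diff(pred)) == real_dir) * 100)
    return {"MSE": mse, "RMSE": rmse, "MAPE": mape,
            "vs_close": vs_close, "vs_pred": vs_pred}
```

RMSE is just the square root of MSE, so the paired values above (e.g. 23.3059 and 4.8276 for DEMA) are redundant checks on each other.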
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
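TA-Lib's KAMA, whose help text is printed above, implements Kaufman's adaptive smoothing: the smoothing speed scales with an efficiency ratio (net price move divided by total path length) over the lookback window. A minimal pure-NumPy sketch of the recursion (the fast/slow constants 2 and 30 are Kaufman's usual defaults, an assumption rather than something read out of TA-Lib):

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (pure-NumPy sketch)."""
    price = np.asarray(price, dtype=float)
    out = np.full_like(price, np.nan)
    fastest, slowest = 2 / (fast + 1), 2 / (slow + 1)
    out[timeperiod - 1] = price[timeperiod - 1]            # seed with price
    for t in range(timeperiod, len(price)):
        change = abs(price[t] - price[t - timeperiod])     # net move
        volatility = np.sum(np.abs(np.diff(price[t - timeperiod:t + 1])))
        er = change / volatility if volatility else 0.0    # efficiency ratio
        sc = (er * (fastest - slowest) + slowest) ** 2     # smoothing constant
        out[t] = out[t - 1] + sc * (price[t] - out[t - 1])
    return out
```

In a strong trend the efficiency ratio approaches 1 and KAMA tracks price almost as fast as a 2-period EMA; in choppy, mean-reverting stretches it slows toward a 30-period EMA, which is why it pairs differently with the ARIMA residuals than the fixed-period averages above.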
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.104, Time=3.79 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.591, Time=5.51 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16779.655, Time=11.07 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.590, Time=8.68 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16989.430, Time=4.19 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16990.286, Time=3.92 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.543, Time=4.21 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-16987.154, Time=4.39 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-16533.935, Time=16.56 sec
Best model: ARIMA(2,3,0)(0,0,0)[0]
Total fit time: 62.346 seconds
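`auto_arima`'s stepwise search above compares candidate orders purely by AIC. The criterion itself is simple: AIC = 2k − 2·ln L, where k counts estimated parameters and L is the maximized likelihood; the search keeps whichever candidate minimizes it. A stdlib sketch using a few of the AICs reported above:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L); lower is better."""
    return 2 * n_params - 2 * log_likelihood

# A few (p, d, q) candidates with the AICs reported in the stepwise search above.
candidates = {
    (1, 3, 1): -16989.104,
    (0, 3, 0): -14558.591,
    (2, 3, 0): -16990.286,
}
best_order = min(candidates, key=candidates.get)   # -> (2, 3, 0)

# Cross-check against the SARIMAX summary: log likelihood 8527.143 with
# 32 estimated parameters (29 exogenous + 2 AR + sigma2) gives AIC -16990.286.
```

This matches the "Best model: ARIMA(2,3,0)" line and the AIC printed in the SARIMAX results table.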
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(2, 3, 0) Log Likelihood 8527.143
Date: Sun, 12 Dec 2021 AIC -16990.286
Time: 18:46:16 BIC -16840.179
Sample: 0 HQIC -16932.639
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.1e-16 nan nan nan nan nan
x2 -3.811e-16 -0 inf 0.000 -3.81e-16 -3.81e-16
x3 8.776e-16 4.38e-27 2e+11 0.000 8.78e-16 8.78e-16
x4 1.0000 4.36e-27 2.29e+26 0.000 1.000 1.000
x5 6.686e-16 4.14e-27 1.61e+11 0.000 6.69e-16 6.69e-16
x6 -5.238e-17 9.44e-27 -5.55e+09 0.000 -5.24e-17 -5.24e-17
x7 -1.709e-16 4.37e-27 -3.91e+10 0.000 -1.71e-16 -1.71e-16
x8 1.439e-15 4.33e-27 3.32e+11 0.000 1.44e-15 1.44e-15
x9 -2.924e-16 5.73e-28 -5.1e+11 0.000 -2.92e-16 -2.92e-16
x10 -1.028e-16 1.78e-27 -5.76e+10 0.000 -1.03e-16 -1.03e-16
x11 -4.338e-16 4.31e-27 -1.01e+11 0.000 -4.34e-16 -4.34e-16
x12 1.72e-16 4.33e-27 3.97e+10 0.000 1.72e-16 1.72e-16
x13 -3.011e-16 4.36e-27 -6.91e+10 0.000 -3.01e-16 -3.01e-16
x14 -2.611e-16 1.27e-26 -2.06e+10 0.000 -2.61e-16 -2.61e-16
x15 1.53e-14 4.46e-27 3.43e+12 0.000 1.53e-14 1.53e-14
x16 -1.401e-14 5.45e-27 -2.57e+12 0.000 -1.4e-14 -1.4e-14
x17 2.316e-14 4.12e-27 5.62e+12 0.000 2.32e-14 2.32e-14
x18 -3.727e-15 3.71e-27 -1.01e+12 0.000 -3.73e-15 -3.73e-15
x19 -1.361e-14 4.94e-27 -2.75e+12 0.000 -1.36e-14 -1.36e-14
x20 -5.277e-15 6.08e-27 -8.68e+11 0.000 -5.28e-15 -5.28e-15
x21 1.178e-18 3.12e-27 3.77e+08 0.000 1.18e-18 1.18e-18
x22 -8.779e-17 1.74e-29 -5.05e+12 0.000 -8.78e-17 -8.78e-17
x23 3.183e-17 5.91e-29 5.39e+11 0.000 3.18e-17 3.18e-17
x24 -1.683e-16 1.41e-26 -1.19e+10 0.000 -1.68e-16 -1.68e-16
x25 8.988e-17 1.48e-30 6.08e+13 0.000 8.99e-17 8.99e-17
x26 4.435e-17 1.58e-26 2.8e+09 0.000 4.44e-17 4.44e-17
x27 1.538e-16 8.87e-27 1.73e+10 0.000 1.54e-16 1.54e-16
x28 1.635e-16 1.22e-26 1.34e+10 0.000 1.63e-16 1.63e-16
x29 1.474e-16 6.34e-27 2.33e+10 0.000 1.47e-16 1.47e-16
ar.L1 -0.9879 1.21e-22 -8.16e+21 0.000 -0.988 -0.988
ar.L2 -0.4879 1.29e-22 -3.79e+21 0.000 -0.488 -0.488
sigma2 1e-10 6.99e-11 1.432 0.152 -3.69e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 57.29 Jarque-Bera (JB): 559955.86
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.13 Skew: 0.64
Prob(H) (two-sided): 0.00 Kurtosis: 132.20
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number inf. Standard errors may be unstable.
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/mlemodel.py:2968: RuntimeWarning: divide by zero encountered in true_divide
return self.params / self.bse
ARIMA order: (2, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.45223, saving model to LSTM7.h5
45/45 - 3s - loss: 0.2336 - mse: 0.2336 - mae: 0.3737 - val_loss: 0.4522 - val_mse: 0.4522 - val_mae: 0.6470 - lr: 0.0010 - 3s/epoch - 62ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.45223 to 0.17852, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0456 - mse: 0.0456 - mae: 0.1706 - val_loss: 0.1785 - val_mse: 0.1785 - val_mae: 0.3874 - lr: 0.0010 - 226ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.17852 to 0.11256, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0233 - mse: 0.0233 - mae: 0.1223 - val_loss: 0.1126 - val_mse: 0.1126 - val_mae: 0.2916 - lr: 0.0010 - 275ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss improved from 0.11256 to 0.10893, saving model to LSTM7.h5
45/45 - 0s - loss: 0.0170 - mse: 0.0170 - mae: 0.1034 - val_loss: 0.1089 - val_mse: 0.1089 - val_mae: 0.2845 - lr: 0.0010 - 216ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0126 - mse: 0.0126 - mae: 0.0898 - val_loss: 0.1238 - val_mse: 0.1238 - val_mae: 0.3087 - lr: 0.0010 - 193ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0102 - mse: 0.0102 - mae: 0.0806 - val_loss: 0.1288 - val_mse: 0.1288 - val_mae: 0.3160 - lr: 0.0010 - 205ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0119 - mse: 0.0119 - mae: 0.0866 - val_loss: 0.1292 - val_mse: 0.1292 - val_mae: 0.3164 - lr: 0.0010 - 274ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0106 - mse: 0.0106 - mae: 0.0807 - val_loss: 0.1173 - val_mse: 0.1173 - val_mae: 0.2971 - lr: 0.0010 - 203ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00009: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0097 - mse: 0.0097 - mae: 0.0784 - val_loss: 0.1345 - val_mse: 0.1345 - val_mae: 0.3226 - lr: 0.0010 - 184ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0733 - val_loss: 0.1298 - val_mse: 0.1298 - val_mae: 0.3158 - lr: 1.0000e-04 - 198ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0703 - val_loss: 0.1264 - val_mse: 0.1264 - val_mae: 0.3108 - lr: 1.0000e-04 - 205ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0090 - mse: 0.0090 - mae: 0.0736 - val_loss: 0.1259 - val_mse: 0.1259 - val_mae: 0.3097 - lr: 1.0000e-04 - 246ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0679 - val_loss: 0.1230 - val_mse: 0.1230 - val_mae: 0.3049 - lr: 1.0000e-04 - 265ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00014: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0082 - mse: 0.0082 - mae: 0.0716 - val_loss: 0.1208 - val_mse: 0.1208 - val_mae: 0.3013 - lr: 1.0000e-04 - 189ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0717 - val_loss: 0.1208 - val_mse: 0.1208 - val_mae: 0.3012 - lr: 1.0000e-05 - 263ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0077 - mse: 0.0077 - mae: 0.0683 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3009 - lr: 1.0000e-05 - 236ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0689 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3005 - lr: 1.0000e-05 - 223ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0660 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3010 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 19/500
Epoch 00019: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00019: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0694 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3009 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0670 - val_loss: 0.1208 - val_mse: 0.1208 - val_mae: 0.3012 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0656 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3007 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0671 - val_loss: 0.1207 - val_mse: 0.1207 - val_mae: 0.3009 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0670 - val_loss: 0.1207 - val_mse: 0.1207 - val_mae: 0.3009 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0079 - mse: 0.0079 - mae: 0.0696 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3008 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0686 - val_loss: 0.1207 - val_mse: 0.1207 - val_mae: 0.3009 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0668 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3005 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0682 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3007 - lr: 1.0000e-05 - 195ms/epoch - 4ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0685 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3006 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0672 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3003 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0666 - val_loss: 0.1204 - val_mse: 0.1204 - val_mae: 0.3003 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0659 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3003 - lr: 1.0000e-05 - 202ms/epoch - 4ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0675 - val_loss: 0.1208 - val_mse: 0.1208 - val_mae: 0.3008 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0665 - val_loss: 0.1211 - val_mse: 0.1211 - val_mae: 0.3013 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0665 - val_loss: 0.1213 - val_mse: 0.1213 - val_mae: 0.3016 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0686 - val_loss: 0.1213 - val_mse: 0.1213 - val_mae: 0.3016 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0080 - mse: 0.0080 - mae: 0.0689 - val_loss: 0.1211 - val_mse: 0.1211 - val_mae: 0.3013 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0644 - val_loss: 0.1210 - val_mse: 0.1210 - val_mae: 0.3010 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0680 - val_loss: 0.1209 - val_mse: 0.1209 - val_mae: 0.3009 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0646 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3003 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0693 - val_loss: 0.1205 - val_mse: 0.1205 - val_mae: 0.3003 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0070 - mse: 0.0070 - mae: 0.0672 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3004 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0649 - val_loss: 0.1207 - val_mse: 0.1207 - val_mae: 0.3005 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0672 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3003 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0678 - val_loss: 0.1206 - val_mse: 0.1206 - val_mae: 0.3003 - lr: 1.0000e-05 - 186ms/epoch - 4ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0648 - val_loss: 0.1209 - val_mse: 0.1209 - val_mae: 0.3008 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0665 - val_loss: 0.1209 - val_mse: 0.1209 - val_mae: 0.3006 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0657 - val_loss: 0.1213 - val_mse: 0.1213 - val_mae: 0.3012 - lr: 1.0000e-05 - 244ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0609 - val_loss: 0.1219 - val_mse: 0.1219 - val_mae: 0.3022 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0072 - mse: 0.0072 - mae: 0.0660 - val_loss: 0.1219 - val_mse: 0.1219 - val_mae: 0.3022 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0656 - val_loss: 0.1218 - val_mse: 0.1218 - val_mae: 0.3020 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0673 - val_loss: 0.1221 - val_mse: 0.1221 - val_mae: 0.3025 - lr: 1.0000e-05 - 247ms/epoch - 5ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0066 - mse: 0.0066 - mae: 0.0633 - val_loss: 0.1218 - val_mse: 0.1218 - val_mae: 0.3020 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0623 - val_loss: 0.1214 - val_mse: 0.1214 - val_mae: 0.3013 - lr: 1.0000e-05 - 191ms/epoch - 4ms/step
Epoch 54/500
Epoch 00054: val_loss did not improve from 0.10893
45/45 - 0s - loss: 0.0071 - mse: 0.0071 - mae: 0.0657 - val_loss: 0.1210 - val_mse: 0.1210 - val_mae: 0.3007 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 00054: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 44.65212926265077
RMSE: 6.682224873696692
MAPE: 5.204686480071648
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 45.539825469272486
RMSE: 6.748320196113436
MAPE: 5.43245952292463
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 42.30488040231578
RMSE: 6.504220199402522
MAPE: 5.010195929360332
DEMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 54.48% Accuracy
MSE: 23.305922116020078
RMSE: 4.827620751055335
MAPE: 3.7452201197397774
KAMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 18.082341646298453
RMSE: 4.252333670621163
MAPE: 3.4333194517527637
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
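MIDPOINT, per the help text above, is simply the midpoint of the highest and lowest value of the input over the trailing window. A pure-NumPy sketch of the same calculation:

```python
import numpy as np

def midpoint(price, timeperiod=14):
    """Rolling (max + min) / 2 over the trailing window; NaN until filled."""
    price = np.asarray(price, dtype=float)
    out = np.full_like(price, np.nan)
    for t in range(timeperiod - 1, len(price)):
        window = price[t - timeperiod + 1:t + 1]
        out[t] = (window.max() + window.min()) / 2.0
    return out
```

Because it only looks at the window's extremes, MIDPOINT ignores everything between the high and the low, which makes it a step-like, low-noise input compared with the weighted averages used earlier.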
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.238, Time=3.57 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.578, Time=5.49 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16746.296, Time=8.42 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.578, Time=8.49 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16987.591, Time=3.66 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16395.520, Time=12.87 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17063.555, Time=12.59 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.578, Time=10.44 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16082.554, Time=20.02 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-15249.608, Time=19.18 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 104.740 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8563.778
Date: Sun, 12 Dec 2021 AIC -17063.555
Time: 18:49:53 BIC -16913.448
Sample: 0 HQIC -17005.908
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.495e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x2 -1.485e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x3 -1.518e-10 0.000 -1.21e-06 1.000 -0.000 0.000
x4 1.0000 0.000 8075.329 0.000 1.000 1.000
x5 -1.356e-10 0.000 -1.15e-06 1.000 -0.000 0.000
x6 -2.861e-09 0.000 -2.38e-05 1.000 -0.000 0.000
x7 -1.374e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x8 -1.371e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x9 -7.133e-11 7.1e-06 -1.01e-05 1.000 -1.39e-05 1.39e-05
x10 -1.23e-10 4.21e-05 -2.92e-06 1.000 -8.24e-05 8.24e-05
x11 -1.357e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x12 -1.401e-10 0.000 -1.11e-06 1.000 -0.000 0.000
x13 -1.436e-10 0.000 -1.16e-06 1.000 -0.000 0.000
x14 -1.179e-09 0.000 -3.22e-06 1.000 -0.001 0.001
x15 -1.651e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x16 -1.064e-10 0.000 -9.62e-07 1.000 -0.000 0.000
x17 -1.041e-10 0.000 -9.53e-07 1.000 -0.000 0.000
x18 -4.477e-10 0.000 -1.99e-06 1.000 -0.000 0.000
x19 -1.816e-10 0.000 -1.26e-06 1.000 -0.000 0.000
x20 -4.37e-10 0.000 -1.96e-06 1.000 -0.000 0.000
x21 -1.371e-09 9.1e-05 -1.51e-05 1.000 -0.000 0.000
x22 -1.059e-11 nan nan nan nan nan
x23 -9.902e-11 3.83e-09 -0.026 0.979 -7.61e-09 7.41e-09
x24 -5.521e-09 0.000 -1.34e-05 1.000 -0.001 0.001
x25 -4.621e-09 6.42e-05 -7.2e-05 1.000 -0.000 0.000
x26 -1.587e-09 0.000 -3.73e-06 1.000 -0.001 0.001
x27 -8.504e-10 0.000 -2.79e-06 1.000 -0.001 0.001
x28 -1.122e-09 0.000 -3.14e-06 1.000 -0.001 0.001
x29 -6.091e-10 0.000 -2.45e-06 1.000 -0.000 0.000
ma.L1 -1.3318 7.32e-07 -1.82e+06 0.000 -1.332 -1.332
ma.L2 0.3767 7.56e-07 4.98e+05 0.000 0.377 0.377
sigma2 9.093e-11 6.97e-11 1.304 0.192 -4.57e-11 2.28e-10
===================================================================================
Ljung-Box (L1) (Q): 76.00 Jarque-Bera (JB): 304933.46
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.03 Skew: 1.65
Prob(H) (two-sided): 0.00 Kurtosis: 98.29
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.19e+28. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.51896, saving model to LSTM7.h5
58/58 - 2s - loss: 0.1462 - mse: 0.1462 - mae: 0.2950 - val_loss: 0.5190 - val_mse: 0.5190 - val_mae: 0.6940 - lr: 0.0010 - 2s/epoch - 40ms/step
Epoch 2/500
Epoch 00002: val_loss improved from 0.51896 to 0.03828, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0396 - mse: 0.0396 - mae: 0.1542 - val_loss: 0.0383 - val_mse: 0.0383 - val_mae: 0.1708 - lr: 0.0010 - 292ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.03828 to 0.02764, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0300 - mse: 0.0300 - mae: 0.1367 - val_loss: 0.0276 - val_mse: 0.0276 - val_mae: 0.1316 - lr: 0.0010 - 295ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.02764
58/58 - 0s - loss: 0.0183 - mse: 0.0183 - mae: 0.1075 - val_loss: 0.0306 - val_mse: 0.0306 - val_mae: 0.1346 - lr: 0.0010 - 286ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.02764 to 0.02589, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0167 - mse: 0.0167 - mae: 0.1022 - val_loss: 0.0259 - val_mse: 0.0259 - val_mae: 0.1259 - lr: 0.0010 - 322ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: val_loss improved from 0.02589 to 0.02461, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0127 - mse: 0.0127 - mae: 0.0887 - val_loss: 0.0246 - val_mse: 0.0246 - val_mae: 0.1241 - lr: 0.0010 - 277ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.02461
58/58 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0834 - val_loss: 0.0266 - val_mse: 0.0266 - val_mae: 0.1266 - lr: 0.0010 - 267ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss improved from 0.02461 to 0.02069, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0087 - mse: 0.0087 - mae: 0.0732 - val_loss: 0.0207 - val_mse: 0.0207 - val_mae: 0.1155 - lr: 0.0010 - 280ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss improved from 0.02069 to 0.01874, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0092 - mse: 0.0092 - mae: 0.0751 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1096 - lr: 0.0010 - 259ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.01874
58/58 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0713 - val_loss: 0.0188 - val_mse: 0.0188 - val_mae: 0.1103 - lr: 0.0010 - 281ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: val_loss improved from 0.01874 to 0.01718, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0652 - val_loss: 0.0172 - val_mse: 0.0172 - val_mae: 0.1075 - lr: 0.0010 - 248ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss improved from 0.01718 to 0.01694, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0073 - mse: 0.0073 - mae: 0.0669 - val_loss: 0.0169 - val_mse: 0.0169 - val_mae: 0.1040 - lr: 0.0010 - 271ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss improved from 0.01694 to 0.01646, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0587 - val_loss: 0.0165 - val_mse: 0.0165 - val_mae: 0.1074 - lr: 0.0010 - 294ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.01646
58/58 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0638 - val_loss: 0.0169 - val_mse: 0.0169 - val_mae: 0.1048 - lr: 0.0010 - 255ms/epoch - 4ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.01646
58/58 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0615 - val_loss: 0.0177 - val_mse: 0.0177 - val_mae: 0.1121 - lr: 0.0010 - 253ms/epoch - 4ms/step
Epoch 16/500
Epoch 00016: val_loss improved from 0.01646 to 0.01602, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0075 - mse: 0.0075 - mae: 0.0678 - val_loss: 0.0160 - val_mse: 0.0160 - val_mae: 0.1003 - lr: 0.0010 - 307ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01602
58/58 - 0s - loss: 0.0065 - mse: 0.0065 - mae: 0.0613 - val_loss: 0.0184 - val_mse: 0.0184 - val_mae: 0.1138 - lr: 0.0010 - 252ms/epoch - 4ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.01602
58/58 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0654 - val_loss: 0.0188 - val_mse: 0.0188 - val_mae: 0.1090 - lr: 0.0010 - 264ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.01602
58/58 - 0s - loss: 0.0081 - mse: 0.0081 - mae: 0.0704 - val_loss: 0.0231 - val_mse: 0.0231 - val_mae: 0.1272 - lr: 0.0010 - 260ms/epoch - 4ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.01602
58/58 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0665 - val_loss: 0.0172 - val_mse: 0.0172 - val_mae: 0.1047 - lr: 0.0010 - 264ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00021: val_loss did not improve from 0.01602
58/58 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0611 - val_loss: 0.0216 - val_mse: 0.0216 - val_mae: 0.1211 - lr: 0.0010 - 273ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss improved from 0.01602 to 0.01467, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0181 - mse: 0.0181 - mae: 0.1165 - val_loss: 0.0147 - val_mse: 0.0147 - val_mae: 0.0997 - lr: 1.0000e-04 - 250ms/epoch - 4ms/step
Epoch 23/500
Epoch 00023: val_loss improved from 0.01467 to 0.01324, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0067 - mse: 0.0067 - mae: 0.0651 - val_loss: 0.0132 - val_mse: 0.0132 - val_mae: 0.0947 - lr: 1.0000e-04 - 285ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss improved from 0.01324 to 0.01202, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0562 - val_loss: 0.0120 - val_mse: 0.0120 - val_mae: 0.0900 - lr: 1.0000e-04 - 269ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss improved from 0.01202 to 0.01156, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0563 - val_loss: 0.0116 - val_mse: 0.0116 - val_mae: 0.0880 - lr: 1.0000e-04 - 265ms/epoch - 5ms/step
[... epochs 26-53 omitted: at lr 1e-04, val_loss improves steadily from 0.01156 to 0.00947, with the checkpoint saved to LSTM7.h5 on each improvement ...]
Epoch 54/500
Epoch 00054: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00054: val_loss did not improve from 0.00947
58/58 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0460 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0774 - lr: 1.0000e-04 - 237ms/epoch - 4ms/step
Epoch 59/500
Epoch 00059: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00059: val_loss did not improve from 0.00947
[... epochs 55-179 omitted: at the floor lr of 1e-05, val_loss creeps down from 0.00947 to 0.00895, with only marginal improvements saved to LSTM7.h5 ...]
Epoch 180/500
Epoch 00180: val_loss improved from 0.00895 to 0.00893, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0402 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0754 - lr: 1.0000e-05 - 253ms/epoch - 4ms/step
Epoch 181/500
Epoch 00181: val_loss did not improve from 0.00893
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0426 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0754 - lr: 1.0000e-05 - 232ms/epoch - 4ms/step
Epoch 182/500
Epoch 00182: val_loss did not improve from 0.00893
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0444 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0753 - lr: 1.0000e-05 - 240ms/epoch - 4ms/step
Epoch 183/500
Epoch 00183: val_loss improved from 0.00893 to 0.00893, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0425 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0753 - lr: 1.0000e-05 - 275ms/epoch - 5ms/step
Epoch 184/500
Epoch 00184: val_loss improved from 0.00893 to 0.00891, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0401 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0752 - lr: 1.0000e-05 - 275ms/epoch - 5ms/step
Epoch 185/500
Epoch 00185: val_loss did not improve from 0.00891
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0427 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0753 - lr: 1.0000e-05 - 236ms/epoch - 4ms/step
Epoch 186/500
Epoch 00186: val_loss did not improve from 0.00891
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0421 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0753 - lr: 1.0000e-05 - 252ms/epoch - 4ms/step
Epoch 187/500
Epoch 00187: val_loss did not improve from 0.00891
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0427 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0754 - lr: 1.0000e-05 - 309ms/epoch - 5ms/step
Epoch 188/500
Epoch 00188: val_loss did not improve from 0.00891
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0451 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0754 - lr: 1.0000e-05 - 274ms/epoch - 5ms/step
Epoch 189/500
Epoch 00189: val_loss did not improve from 0.00891
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0416 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0753 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 190/500
Epoch 00190: val_loss did not improve from 0.00891
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0410 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0753 - lr: 1.0000e-05 - 274ms/epoch - 5ms/step
Epoch 191/500
Epoch 00191: val_loss did not improve from 0.00891
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0436 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0753 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 192/500
Epoch 00192: val_loss improved from 0.00891 to 0.00891, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0437 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0752 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 193/500
Epoch 00193: val_loss improved from 0.00891 to 0.00889, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0427 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0751 - lr: 1.0000e-05 - 308ms/epoch - 5ms/step
Epoch 194/500
Epoch 00194: val_loss did not improve from 0.00889
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0427 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0752 - lr: 1.0000e-05 - 246ms/epoch - 4ms/step
Epoch 195/500
Epoch 00195: val_loss improved from 0.00889 to 0.00889, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0446 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0751 - lr: 1.0000e-05 - 284ms/epoch - 5ms/step
Epoch 196/500
Epoch 00196: val_loss improved from 0.00889 to 0.00889, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0418 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0751 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 197/500
Epoch 00197: val_loss improved from 0.00889 to 0.00886, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0440 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0750 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 198/500
Epoch 00198: val_loss did not improve from 0.00886
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0404 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0751 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 199/500
Epoch 00199: val_loss improved from 0.00886 to 0.00885, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0422 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0750 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 200/500
Epoch 00200: val_loss did not improve from 0.00885
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0441 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0751 - lr: 1.0000e-05 - 253ms/epoch - 4ms/step
Epoch 201/500
Epoch 00201: val_loss did not improve from 0.00885
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0411 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0750 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 202/500
Epoch 00202: val_loss did not improve from 0.00885
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0427 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0750 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 203/500
Epoch 00203: val_loss did not improve from 0.00885
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0426 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0750 - lr: 1.0000e-05 - 249ms/epoch - 4ms/step
Epoch 204/500
Epoch 00204: val_loss did not improve from 0.00885
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0418 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0750 - lr: 1.0000e-05 - 242ms/epoch - 4ms/step
Epoch 205/500
Epoch 00205: val_loss improved from 0.00885 to 0.00884, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0426 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0749 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 206/500
Epoch 00206: val_loss improved from 0.00884 to 0.00881, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0408 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0748 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 207/500
Epoch 00207: val_loss did not improve from 0.00881
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0424 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0748 - lr: 1.0000e-05 - 235ms/epoch - 4ms/step
Epoch 208/500
Epoch 00208: val_loss improved from 0.00881 to 0.00881, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0432 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0747 - lr: 1.0000e-05 - 284ms/epoch - 5ms/step
Epoch 209/500
Epoch 00209: val_loss improved from 0.00881 to 0.00880, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0406 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0747 - lr: 1.0000e-05 - 300ms/epoch - 5ms/step
Epoch 210/500
Epoch 00210: val_loss improved from 0.00880 to 0.00876, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0419 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 282ms/epoch - 5ms/step
Epoch 211/500
Epoch 00211: val_loss did not improve from 0.00876
58/58 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0448 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 248ms/epoch - 4ms/step
Epoch 212/500
Epoch 00212: val_loss improved from 0.00876 to 0.00876, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0410 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 264ms/epoch - 5ms/step
Epoch 213/500
Epoch 00213: val_loss improved from 0.00876 to 0.00876, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0404 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 214/500
Epoch 00214: val_loss did not improve from 0.00876
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0415 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 254ms/epoch - 4ms/step
Epoch 215/500
Epoch 00215: val_loss did not improve from 0.00876
58/58 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0455 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0746 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 216/500
Epoch 00216: val_loss improved from 0.00876 to 0.00875, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0433 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 217/500
Epoch 00217: val_loss improved from 0.00875 to 0.00874, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0412 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0745 - lr: 1.0000e-05 - 290ms/epoch - 5ms/step
Epoch 218/500
Epoch 00218: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0418 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 276ms/epoch - 5ms/step
Epoch 219/500
Epoch 00219: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0423 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0746 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 220/500
Epoch 00220: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0416 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0746 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 221/500
Epoch 00221: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0426 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0746 - lr: 1.0000e-05 - 242ms/epoch - 4ms/step
Epoch 222/500
Epoch 00222: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0387 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0746 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 223/500
Epoch 00223: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0432 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0746 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 224/500
Epoch 00224: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0425 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0745 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 225/500
Epoch 00225: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0415 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0745 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 226/500
Epoch 00226: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0409 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0745 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 227/500
Epoch 00227: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0423 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0744 - lr: 1.0000e-05 - 238ms/epoch - 4ms/step
Epoch 228/500
Epoch 00228: val_loss improved from 0.00874 to 0.00874, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0431 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0743 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 229/500
Epoch 00229: val_loss did not improve from 0.00874
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0410 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0743 - lr: 1.0000e-05 - 234ms/epoch - 4ms/step
Epoch 230/500
Epoch 00230: val_loss improved from 0.00874 to 0.00871, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0442 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0742 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 231/500
Epoch 00231: val_loss improved from 0.00871 to 0.00870, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0424 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0742 - lr: 1.0000e-05 - 274ms/epoch - 5ms/step
Epoch 232/500
Epoch 00232: val_loss improved from 0.00870 to 0.00866, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0425 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0740 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 233/500
Epoch 00233: val_loss improved from 0.00866 to 0.00865, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0398 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0740 - lr: 1.0000e-05 - 295ms/epoch - 5ms/step
Epoch 234/500
Epoch 00234: val_loss did not improve from 0.00865
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0418 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0741 - lr: 1.0000e-05 - 254ms/epoch - 4ms/step
Epoch 235/500
Epoch 00235: val_loss did not improve from 0.00865
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0418 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0742 - lr: 1.0000e-05 - 248ms/epoch - 4ms/step
Epoch 236/500
Epoch 00236: val_loss did not improve from 0.00865
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0439 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0741 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 237/500
Epoch 00237: val_loss improved from 0.00865 to 0.00865, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0420 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0740 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 238/500
Epoch 00238: val_loss improved from 0.00865 to 0.00864, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0424 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0738 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 239/500
Epoch 00239: val_loss improved from 0.00864 to 0.00863, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0416 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0738 - lr: 1.0000e-05 - 261ms/epoch - 4ms/step
Epoch 240/500
Epoch 00240: val_loss did not improve from 0.00863
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0420 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0739 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 241/500
Epoch 00241: val_loss improved from 0.00863 to 0.00861, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0409 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 242/500
Epoch 00242: val_loss did not improve from 0.00861
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0426 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0739 - lr: 1.0000e-05 - 236ms/epoch - 4ms/step
Epoch 243/500
Epoch 00243: val_loss did not improve from 0.00861
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0429 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0739 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 244/500
Epoch 00244: val_loss did not improve from 0.00861
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0420 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0738 - lr: 1.0000e-05 - 271ms/epoch - 5ms/step
Epoch 245/500
Epoch 00245: val_loss did not improve from 0.00861
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0420 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0738 - lr: 1.0000e-05 - 314ms/epoch - 5ms/step
Epoch 246/500
Epoch 00246: val_loss improved from 0.00861 to 0.00861, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0396 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 264ms/epoch - 5ms/step
Epoch 247/500
Epoch 00247: val_loss improved from 0.00861 to 0.00860, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0414 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 248/500
Epoch 00248: val_loss did not improve from 0.00860
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0394 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 249/500
Epoch 00249: val_loss did not improve from 0.00860
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0413 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0738 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 250/500
Epoch 00250: val_loss improved from 0.00860 to 0.00859, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0428 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 319ms/epoch - 5ms/step
Epoch 251/500
Epoch 00251: val_loss did not improve from 0.00859
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0405 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 277ms/epoch - 5ms/step
Epoch 252/500
Epoch 00252: val_loss did not improve from 0.00859
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0420 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0738 - lr: 1.0000e-05 - 264ms/epoch - 5ms/step
Epoch 253/500
Epoch 00253: val_loss did not improve from 0.00859
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0409 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 254/500
Epoch 00254: val_loss did not improve from 0.00859
58/58 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0413 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0738 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 255/500
Epoch 00255: val_loss improved from 0.00859 to 0.00858, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0414 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0736 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 256/500
Epoch 00256: val_loss improved from 0.00858 to 0.00857, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0412 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 259ms/epoch - 4ms/step
Epoch 257/500
Epoch 00257: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0422 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0736 - lr: 1.0000e-05 - 269ms/epoch - 5ms/step
Epoch 258/500
Epoch 00258: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0401 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 259/500
Epoch 00259: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0412 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0736 - lr: 1.0000e-05 - 238ms/epoch - 4ms/step
Epoch 260/500
Epoch 00260: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0433 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0736 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 261/500
Epoch 00261: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0421 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0737 - lr: 1.0000e-05 - 274ms/epoch - 5ms/step
Epoch 262/500
Epoch 00262: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0410 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 249ms/epoch - 4ms/step
Epoch 263/500
Epoch 00263: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0401 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 264/500
Epoch 00264: val_loss did not improve from 0.00857
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0404 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 275ms/epoch - 5ms/step
Epoch 265/500
Epoch 00265: val_loss improved from 0.00857 to 0.00854, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0400 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0734 - lr: 1.0000e-05 - 286ms/epoch - 5ms/step
Epoch 266/500
Epoch 00266: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0414 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 267/500
Epoch 00267: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0396 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 268/500
Epoch 00268: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0408 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0736 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 269/500
Epoch 00269: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0417 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0735 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 270/500
Epoch 00270: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0416 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0735 - lr: 1.0000e-05 - 286ms/epoch - 5ms/step
Epoch 271/500
Epoch 00271: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0418 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0735 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 272/500
Epoch 00272: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0408 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0736 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 273/500
Epoch 00273: val_loss did not improve from 0.00854
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0419 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0736 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 274/500
Epoch 00274: val_loss improved from 0.00854 to 0.00853, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0402 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0735 - lr: 1.0000e-05 - 314ms/epoch - 5ms/step
Epoch 275/500
Epoch 00275: val_loss improved from 0.00853 to 0.00851, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0424 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0733 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 276/500
Epoch 00276: val_loss did not improve from 0.00851
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0421 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0733 - lr: 1.0000e-05 - 237ms/epoch - 4ms/step
Epoch 277/500
Epoch 00277: val_loss did not improve from 0.00851
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0420 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0734 - lr: 1.0000e-05 - 245ms/epoch - 4ms/step
Epoch 278/500
Epoch 00278: val_loss did not improve from 0.00851
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0422 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0734 - lr: 1.0000e-05 - 247ms/epoch - 4ms/step
Epoch 279/500
Epoch 00279: val_loss did not improve from 0.00851
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0410 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0734 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 280/500
Epoch 00280: val_loss improved from 0.00851 to 0.00850, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0024 - mse: 0.0024 - mae: 0.0389 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0733 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 281/500
Epoch 00281: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0404 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0732 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 282/500
Epoch 00282: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0414 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0732 - lr: 1.0000e-05 - 299ms/epoch - 5ms/step
Epoch 283/500
Epoch 00283: val_loss did not improve from 0.00850
58/58 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0431 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0733 - lr: 1.0000e-05 - 264ms/epoch - 5ms/step
Epoch 284/500
Epoch 00284: val_loss improved from 0.00850 to 0.00848, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0403 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 285/500
Epoch 00285: val_loss did not improve from 0.00848
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0420 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 236ms/epoch - 4ms/step
Epoch 286/500
Epoch 00286: val_loss improved from 0.00848 to 0.00846, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0404 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 287/500
Epoch 00287: val_loss improved from 0.00846 to 0.00846, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0408 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 268ms/epoch - 5ms/step
Epoch 288/500
Epoch 00288: val_loss did not improve from 0.00846
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0416 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 259ms/epoch - 4ms/step
Epoch 289/500
Epoch 00289: val_loss did not improve from 0.00846
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0405 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 264ms/epoch - 5ms/step
Epoch 290/500
Epoch 00290: val_loss did not improve from 0.00846
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0398 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 292ms/epoch - 5ms/step
Epoch 291/500
Epoch 00291: val_loss did not improve from 0.00846
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0418 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 292/500
Epoch 00292: val_loss did not improve from 0.00846
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0401 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 287ms/epoch - 5ms/step
Epoch 293/500
Epoch 00293: val_loss improved from 0.00846 to 0.00844, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0408 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0730 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 294/500
Epoch 00294: val_loss did not improve from 0.00844
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0413 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 295/500
Epoch 00295: val_loss did not improve from 0.00844
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0408 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0731 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 296/500
Epoch 00296: val_loss did not improve from 0.00844
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0413 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0730 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 297/500
Epoch 00297: val_loss did not improve from 0.00844
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0424 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0730 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 298/500
Epoch 00298: val_loss improved from 0.00844 to 0.00843, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0398 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0729 - lr: 1.0000e-05 - 261ms/epoch - 4ms/step
Epoch 299/500
Epoch 00299: val_loss improved from 0.00843 to 0.00842, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0422 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0729 - lr: 1.0000e-05 - 291ms/epoch - 5ms/step
Epoch 300/500
Epoch 00300: val_loss improved from 0.00842 to 0.00840, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0030 - mse: 0.0030 - mae: 0.0430 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0728 - lr: 1.0000e-05 - 295ms/epoch - 5ms/step
Epoch 301/500
Epoch 00301: val_loss improved from 0.00840 to 0.00839, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0404 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0727 - lr: 1.0000e-05 - 287ms/epoch - 5ms/step
Epoch 302/500
Epoch 00302: val_loss did not improve from 0.00839
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0402 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0727 - lr: 1.0000e-05 - 263ms/epoch - 5ms/step
Epoch 303/500
Epoch 00303: val_loss improved from 0.00839 to 0.00838, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0395 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0726 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 304/500
Epoch 00304: val_loss improved from 0.00838 to 0.00838, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0412 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0727 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 305/500
Epoch 00305: val_loss did not improve from 0.00838
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0405 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0727 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
[Log condensed: epochs 306-465 repeat the pattern above. With the learning rate plateaued at 1.0000e-05, val_loss improves gradually from 0.00838 to 0.00775 (val_mae from 0.0727 to about 0.0691) while training loss holds near 0.0025-0.0030; each new best checkpoint is saved to LSTM7.h5.]
Epoch 466/500
Epoch 00466: val_loss improved from 0.00775 to 0.00774, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0385 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0689 - lr: 1.0000e-05 - 265ms/epoch - 5ms/step
Epoch 467/500
Epoch 00467: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0400 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0690 - lr: 1.0000e-05 - 250ms/epoch - 4ms/step
Epoch 468/500
Epoch 00468: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0409 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0691 - lr: 1.0000e-05 - 248ms/epoch - 4ms/step
Epoch 469/500
Epoch 00469: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0029 - mse: 0.0029 - mae: 0.0401 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0692 - lr: 1.0000e-05 - 273ms/epoch - 5ms/step
Epoch 470/500
Epoch 00470: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0399 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0690 - lr: 1.0000e-05 - 258ms/epoch - 4ms/step
Epoch 471/500
Epoch 00471: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0397 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0690 - lr: 1.0000e-05 - 235ms/epoch - 4ms/step
Epoch 472/500
Epoch 00472: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0390 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0691 - lr: 1.0000e-05 - 235ms/epoch - 4ms/step
Epoch 473/500
Epoch 00473: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0024 - mse: 0.0024 - mae: 0.0382 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0689 - lr: 1.0000e-05 - 272ms/epoch - 5ms/step
Epoch 474/500
Epoch 00474: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0390 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0690 - lr: 1.0000e-05 - 286ms/epoch - 5ms/step
Epoch 475/500
Epoch 00475: val_loss did not improve from 0.00774
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0393 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0689 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 476/500
Epoch 00476: val_loss improved from 0.00774 to 0.00773, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0394 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0687 - lr: 1.0000e-05 - 266ms/epoch - 5ms/step
Epoch 477/500
Epoch 00477: val_loss improved from 0.00773 to 0.00772, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0401 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0687 - lr: 1.0000e-05 - 334ms/epoch - 6ms/step
Epoch 478/500
Epoch 00478: val_loss improved from 0.00772 to 0.00769, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0024 - mse: 0.0024 - mae: 0.0388 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0686 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 479/500
Epoch 00479: val_loss improved from 0.00769 to 0.00769, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0024 - mse: 0.0024 - mae: 0.0377 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0686 - lr: 1.0000e-05 - 300ms/epoch - 5ms/step
Epoch 480/500
Epoch 00480: val_loss improved from 0.00769 to 0.00768, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0411 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0685 - lr: 1.0000e-05 - 318ms/epoch - 5ms/step
Epoch 481/500
Epoch 00481: val_loss improved from 0.00768 to 0.00767, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0385 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0685 - lr: 1.0000e-05 - 279ms/epoch - 5ms/step
Epoch 482/500
Epoch 00482: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0398 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0687 - lr: 1.0000e-05 - 244ms/epoch - 4ms/step
Epoch 483/500
Epoch 00483: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0410 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0686 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 484/500
Epoch 00484: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0406 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0689 - lr: 1.0000e-05 - 247ms/epoch - 4ms/step
Epoch 485/500
Epoch 00485: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0023 - mse: 0.0023 - mae: 0.0373 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0693 - lr: 1.0000e-05 - 252ms/epoch - 4ms/step
Epoch 486/500
Epoch 00486: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0384 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0690 - lr: 1.0000e-05 - 244ms/epoch - 4ms/step
Epoch 487/500
Epoch 00487: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0386 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0688 - lr: 1.0000e-05 - 244ms/epoch - 4ms/step
Epoch 488/500
Epoch 00488: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0404 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0690 - lr: 1.0000e-05 - 313ms/epoch - 5ms/step
Epoch 489/500
Epoch 00489: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0407 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0686 - lr: 1.0000e-05 - 290ms/epoch - 5ms/step
Epoch 490/500
Epoch 00490: val_loss did not improve from 0.00767
58/58 - 0s - loss: 0.0028 - mse: 0.0028 - mae: 0.0404 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0686 - lr: 1.0000e-05 - 281ms/epoch - 5ms/step
Epoch 491/500
Epoch 00491: val_loss improved from 0.00767 to 0.00767, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0023 - mse: 0.0023 - mae: 0.0377 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0684 - lr: 1.0000e-05 - 275ms/epoch - 5ms/step
Epoch 492/500
Epoch 00492: val_loss improved from 0.00767 to 0.00763, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0395 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0683 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 493/500
Epoch 00493: val_loss improved from 0.00763 to 0.00761, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0396 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0683 - lr: 1.0000e-05 - 251ms/epoch - 4ms/step
Epoch 494/500
Epoch 00494: val_loss improved from 0.00761 to 0.00758, saving model to LSTM7.h5
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0393 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0682 - lr: 1.0000e-05 - 351ms/epoch - 6ms/step
Epoch 495/500
Epoch 00495: val_loss did not improve from 0.00758
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0388 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0681 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 496/500
Epoch 00496: val_loss did not improve from 0.00758
58/58 - 0s - loss: 0.0026 - mse: 0.0026 - mae: 0.0392 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0681 - lr: 1.0000e-05 - 238ms/epoch - 4ms/step
Epoch 497/500
Epoch 00497: val_loss did not improve from 0.00758
58/58 - 0s - loss: 0.0027 - mse: 0.0027 - mae: 0.0389 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0682 - lr: 1.0000e-05 - 267ms/epoch - 5ms/step
Epoch 498/500
Epoch 00498: val_loss did not improve from 0.00758
58/58 - 0s - loss: 0.0025 - mse: 0.0025 - mae: 0.0380 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0683 - lr: 1.0000e-05 - 299ms/epoch - 5ms/step
Epoch 499/500
Epoch 00499: val_loss did not improve from 0.00758
58/58 - 0s - loss: 0.0024 - mse: 0.0024 - mae: 0.0380 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0684 - lr: 1.0000e-05 - 242ms/epoch - 4ms/step
Epoch 500/500
Epoch 00500: val_loss did not improve from 0.00758
58/58 - 0s - loss: 0.0023 - mse: 0.0023 - mae: 0.0375 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0685 - lr: 1.0000e-05 - 245ms/epoch - 4ms/step
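The training trace above shows two Keras callbacks at work: a checkpoint that saves `LSTM7.h5` whenever `val_loss` improves, and a learning-rate schedule that steps the rate down from 1e-3 to the 1e-5 floor seen in the log. A plausible configuration consistent with that trace (the patience value and reduction factor are assumptions inferred from the log, not taken from the notebook's code):

```python
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

callbacks = [
    # Matches "Epoch 00448: val_loss improved ... saving model to LSTM7.h5"
    ModelCheckpoint("LSTM7.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
    # The log shows lr dropping 1e-3 -> 1e-4 -> 1e-5 and then clipping at
    # 1e-5, which fits factor=0.1 with min_lr=1e-5; patience is a guess.
    ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                      patience=5, min_lr=1e-5, verbose=1),
]

# model.fit(X_train, y_train, epochs=500, validation_data=(X_val, y_val),
#           callbacks=callbacks, verbose=2)
```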
Indicator   Pred vs Close   Pred vs Pred   MSE       RMSE     MAPE (%)
SMA         53.36%          48.13%         44.652    6.682    5.205
EMA         54.10%          49.25%         45.540    6.748    5.432
WMA         52.99%          50.00%         42.305    6.504    5.010
DEMA        55.60%          54.48%         23.306    4.828    3.745
KAMA        52.24%          49.63%         18.082    4.252    3.433
MIDPOINT    51.49%          50.00%         91.598    9.571    7.718
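The error and accuracy figures reported for each moving average can be reproduced with a few NumPy helpers. This is a minimal sketch under one assumption: "Prediction vs Close" is taken to mean directional accuracy of the forecast against the previous close, and "Prediction vs Prediction" the same test using the previous forecast as the baseline — the notebook's exact definitions may differ.

```python
import numpy as np

def mse(y, yhat):
    return float(np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2))

def rmse(y, yhat):
    return mse(y, yhat) ** 0.5

def mape(y, yhat):
    # Mean absolute percentage error, in percent.
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

def directional_accuracy(close, pred, vs_pred=False):
    # Percentage of steps where the forecast moved in the same direction
    # as the actual close. Baseline is the previous close, or the previous
    # forecast when vs_pred=True.
    close, pred = np.asarray(close, float), np.asarray(pred, float)
    base = pred[:-1] if vs_pred else close[:-1]
    pred_dir = np.sign(pred[1:] - base)
    true_dir = np.sign(close[1:] - close[:-1])
    return float(np.mean(pred_dir == true_dir) * 100)
```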
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
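The help text above is TA-Lib's documentation for the Tillson T3 indicator. As a rough illustration of what T3 computes, here is a NumPy sketch: T3 is a "generalized DEMA" (GD) applied three times, where GD blends an EMA with a double EMA through the volume factor `vfactor`. The EMA seeding and warm-up handling here are simplifications, so values will not match TA-Lib's output bar-for-bar.

```python
import numpy as np

def ema(x, n):
    # Exponential moving average with smoothing factor 2 / (n + 1),
    # seeded with the first observation (TA-Lib seeds differently).
    a = 2.0 / (n + 1)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = a * x[i] + (1 - a) * out[i - 1]
    return out

def t3(price, timeperiod=5, vfactor=0.7):
    # Tillson T3: GD(GD(GD(price))), with
    # GD(s) = (1 + v) * EMA(s) - v * EMA(EMA(s)).
    def gd(s):
        e = ema(s, timeperiod)
        return (1 + vfactor) * e - vfactor * ema(e, timeperiod)
    return gd(gd(gd(np.asarray(price, dtype=float))))
```

With TA-Lib installed, the equivalent call is `talib.T3(close, timeperiod=5, vfactor=0.7)`, matching the signature shown in the help text.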
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16837.838, Time=3.70 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14497.319, Time=3.94 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16084.348, Time=6.70 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.920, Time=11.85 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15304.480, Time=11.29 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15949.053, Time=12.81 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17059.707, Time=11.96 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15313.920, Time=14.47 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16054.952, Time=13.41 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11445.350, Time=34.67 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 124.799 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8561.853
Date: Sun, 12 Dec 2021 AIC -17059.707
Time: 18:58:22 BIC -16909.600
Sample: 0 HQIC -17002.059
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.003e-07 7.69e-05 -0.001 0.999 -0.000 0.000
x2 -1.001e-07 7.44e-05 -0.001 0.999 -0.000 0.000
x3 -1.006e-07 7.84e-05 -0.001 0.999 -0.000 0.000
x4 1.0000 7.11e-05 1.41e+04 0.000 1.000 1.000
x5 -9.611e-08 6.77e-05 -0.001 0.999 -0.000 0.000
x6 -1.249e-07 4.06e-05 -0.003 0.998 -7.96e-05 7.94e-05
x7 -1e-07 7.89e-05 -0.001 0.999 -0.000 0.000
x8 -0.0002 9.43e-05 -1.838 0.066 -0.000 1.15e-05
x9 2.853e-08 9.89e-05 0.000 1.000 -0.000 0.000
x10 -4.022e-05 0.000 -0.200 0.842 -0.000 0.000
x11 0.0003 7e-05 4.122 0.000 0.000 0.000
x12 7.55e-05 0.000 0.633 0.527 -0.000 0.000
x13 -1.005e-07 7.29e-05 -0.001 0.999 -0.000 0.000
x14 -2.756e-07 0.000 -0.001 0.999 -0.000 0.000
x15 -8.419e-08 8.98e-05 -0.001 0.999 -0.000 0.000
x16 -2.171e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.105e-07 9.93e-05 -0.001 0.999 -0.000 0.000
x18 1.263e-07 3.22e-05 0.004 0.997 -6.31e-05 6.33e-05
x19 -8.769e-08 0.000 -0.001 0.999 -0.000 0.000
x20 -5.772e-08 0.000 -0.000 1.000 -0.000 0.000
x21 -9.77e-08 0.000 -0.001 1.000 -0.000 0.000
x22 -3.686e-12 7.09e-07 -5.2e-06 1.000 -1.39e-06 1.39e-06
x23 -9.216e-12 2.4e-05 -3.83e-07 1.000 -4.71e-05 4.71e-05
x24 -3.648e-07 0.000 -0.001 0.999 -0.001 0.001
x25 -1.391e-07 0.001 -0.000 1.000 -0.002 0.002
x26 -3.142e-07 0.000 -0.001 0.999 -0.001 0.001
x27 -3.042e-07 5.47e-05 -0.006 0.996 -0.000 0.000
x28 -1.785e-07 0.000 -0.001 0.999 -0.000 0.000
x29 -1.909e-07 0.000 -0.001 1.000 -0.001 0.001
ma.L1 -1.3901 8.24e-06 -1.69e+05 0.000 -1.390 -1.390
ma.L2 0.4035 2.01e-05 2.01e+04 0.000 0.403 0.404
sigma2 7.538e-11 6.94e-11 1.085 0.278 -6.07e-11 2.11e-10
===================================================================================
Ljung-Box (L1) (Q): 69.36 Jarque-Bera (JB): 6470073.86
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -12.55
Prob(H) (two-sided): 0.00 Kurtosis: 441.48
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.58e+22. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
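The stepwise trace and the final "ARIMA order: (0, 3, 2)" line are the output `pmdarima.auto_arima` produces with `trace=True`. A hedged sketch of a call consistent with that trace follows; the series is a synthetic stand-in, and only `d=3`, the absent seasonal component, and AIC as the criterion are actually confirmed by the log — the remaining defaults are assumptions.

```python
import numpy as np
import pmdarima as pm

# Synthetic stand-in for the 808-observation training series used above.
y_train = np.cumsum(np.random.randn(808))

model = pm.auto_arima(
    y_train,
    d=3,                            # every candidate in the trace uses d=3
    seasonal=False,                 # seasonal part is (0,0,0)[0] throughout
    stepwise=True,                  # "Performing stepwise search to minimize aic"
    trace=True,                     # prints the per-candidate AIC lines
    information_criterion="aic",
)
print("ARIMA order:", model.order)
```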
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04050, saving model to LSTM7.h5
43/43 - 3s - loss: 0.0770 - mse: 0.0770 - mae: 0.2109 - val_loss: 0.0405 - val_mse: 0.0405 - val_mae: 0.1509 - lr: 0.0010 - 3s/epoch - 61ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04050
43/43 - 0s - loss: 0.0220 - mse: 0.0220 - mae: 0.1194 - val_loss: 0.0447 - val_mse: 0.0447 - val_mae: 0.1626 - lr: 0.0010 - 223ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.04050 to 0.01821, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0221 - mse: 0.0221 - mae: 0.1186 - val_loss: 0.0182 - val_mse: 0.0182 - val_mae: 0.1018 - lr: 0.0010 - 250ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0231 - mse: 0.0231 - mae: 0.1213 - val_loss: 0.0237 - val_mse: 0.0237 - val_mae: 0.1161 - lr: 0.0010 - 237ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0167 - mse: 0.0167 - mae: 0.1010 - val_loss: 0.0191 - val_mse: 0.0191 - val_mae: 0.1143 - lr: 0.0010 - 182ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0295 - mse: 0.0295 - mae: 0.1410 - val_loss: 0.0344 - val_mse: 0.0344 - val_mae: 0.1468 - lr: 0.0010 - 198ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0164 - mse: 0.0164 - mae: 0.0990 - val_loss: 0.0503 - val_mse: 0.0503 - val_mae: 0.1967 - lr: 0.0010 - 178ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00008: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0357 - mse: 0.0357 - mae: 0.1616 - val_loss: 0.0665 - val_mse: 0.0665 - val_mae: 0.2258 - lr: 0.0010 - 192ms/epoch - 4ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0141 - mse: 0.0141 - mae: 0.0950 - val_loss: 0.0350 - val_mse: 0.0350 - val_mae: 0.1511 - lr: 1.0000e-04 - 200ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0643 - val_loss: 0.0279 - val_mse: 0.0279 - val_mae: 0.1326 - lr: 1.0000e-04 - 180ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0069 - mse: 0.0069 - mae: 0.0650 - val_loss: 0.0237 - val_mse: 0.0237 - val_mae: 0.1212 - lr: 1.0000e-04 - 193ms/epoch - 4ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0617 - val_loss: 0.0214 - val_mse: 0.0214 - val_mae: 0.1146 - lr: 1.0000e-04 - 181ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00013: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0611 - val_loss: 0.0194 - val_mse: 0.0194 - val_mae: 0.1088 - lr: 1.0000e-04 - 184ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0626 - val_loss: 0.0193 - val_mse: 0.0193 - val_mae: 0.1085 - lr: 1.0000e-05 - 200ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0619 - val_loss: 0.0192 - val_mse: 0.0192 - val_mae: 0.1083 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0636 - val_loss: 0.0190 - val_mse: 0.0190 - val_mae: 0.1077 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0064 - mse: 0.0064 - mae: 0.0626 - val_loss: 0.0188 - val_mse: 0.0188 - val_mae: 0.1071 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00018: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0603 - val_loss: 0.0187 - val_mse: 0.0187 - val_mae: 0.1067 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0605 - val_loss: 0.0185 - val_mse: 0.0185 - val_mae: 0.1063 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.01821
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0598 - val_loss: 0.0183 - val_mse: 0.0183 - val_mae: 0.1057 - lr: 1.0000e-05 - 190ms/epoch - 4ms/step
Epoch 21/500
Epoch 00021: val_loss improved from 0.01821 to 0.01815, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0612 - val_loss: 0.0181 - val_mse: 0.0181 - val_mae: 0.1053 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 22/500
Epoch 00022: val_loss improved from 0.01815 to 0.01805, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0603 - val_loss: 0.0180 - val_mse: 0.0180 - val_mae: 0.1050 - lr: 1.0000e-05 - 240ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss improved from 0.01805 to 0.01794, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0611 - val_loss: 0.0179 - val_mse: 0.0179 - val_mae: 0.1047 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss improved from 0.01794 to 0.01781, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0617 - val_loss: 0.0178 - val_mse: 0.0178 - val_mae: 0.1043 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss improved from 0.01781 to 0.01755, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0607 - val_loss: 0.0176 - val_mse: 0.0176 - val_mae: 0.1036 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss improved from 0.01755 to 0.01736, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0589 - val_loss: 0.0174 - val_mse: 0.0174 - val_mae: 0.1030 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: val_loss improved from 0.01736 to 0.01717, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0606 - val_loss: 0.0172 - val_mse: 0.0172 - val_mae: 0.1025 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss improved from 0.01717 to 0.01703, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0616 - val_loss: 0.0170 - val_mse: 0.0170 - val_mae: 0.1021 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss improved from 0.01703 to 0.01692, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0617 - val_loss: 0.0169 - val_mse: 0.0169 - val_mae: 0.1018 - lr: 1.0000e-05 - 192ms/epoch - 4ms/step
Epoch 30/500
Epoch 00030: val_loss improved from 0.01692 to 0.01675, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0063 - mse: 0.0063 - mae: 0.0618 - val_loss: 0.0168 - val_mse: 0.0168 - val_mae: 0.1012 - lr: 1.0000e-05 - 237ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss improved from 0.01675 to 0.01659, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0628 - val_loss: 0.0166 - val_mse: 0.0166 - val_mae: 0.1007 - lr: 1.0000e-05 - 242ms/epoch - 6ms/step
Epoch 32/500
Epoch 00032: val_loss improved from 0.01659 to 0.01650, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0060 - mse: 0.0060 - mae: 0.0592 - val_loss: 0.0165 - val_mse: 0.0165 - val_mae: 0.1004 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: val_loss improved from 0.01650 to 0.01635, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0562 - val_loss: 0.0163 - val_mse: 0.0163 - val_mae: 0.1000 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss improved from 0.01635 to 0.01617, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0575 - val_loss: 0.0162 - val_mse: 0.0162 - val_mae: 0.0995 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 35/500
Epoch 00035: val_loss improved from 0.01617 to 0.01615, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0587 - val_loss: 0.0162 - val_mse: 0.0162 - val_mae: 0.0994 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.01615
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0603 - val_loss: 0.0162 - val_mse: 0.0162 - val_mae: 0.0995 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss improved from 0.01615 to 0.01592, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0603 - val_loss: 0.0159 - val_mse: 0.0159 - val_mae: 0.0986 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss improved from 0.01592 to 0.01564, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0588 - val_loss: 0.0156 - val_mse: 0.0156 - val_mae: 0.0978 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 39/500
Epoch 00039: val_loss improved from 0.01564 to 0.01561, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0062 - mse: 0.0062 - mae: 0.0615 - val_loss: 0.0156 - val_mse: 0.0156 - val_mae: 0.0977 - lr: 1.0000e-05 - 196ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss improved from 0.01561 to 0.01549, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0057 - mse: 0.0057 - mae: 0.0589 - val_loss: 0.0155 - val_mse: 0.0155 - val_mae: 0.0973 - lr: 1.0000e-05 - 238ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.01549
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0576 - val_loss: 0.0156 - val_mse: 0.0156 - val_mae: 0.0975 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 42/500
Epoch 00042: val_loss improved from 0.01549 to 0.01544, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0592 - val_loss: 0.0154 - val_mse: 0.0154 - val_mae: 0.0971 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss improved from 0.01544 to 0.01533, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0561 - val_loss: 0.0153 - val_mse: 0.0153 - val_mae: 0.0967 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss improved from 0.01533 to 0.01516, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0058 - mse: 0.0058 - mae: 0.0599 - val_loss: 0.0152 - val_mse: 0.0152 - val_mae: 0.0962 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss improved from 0.01516 to 0.01501, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0582 - val_loss: 0.0150 - val_mse: 0.0150 - val_mae: 0.0957 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss improved from 0.01501 to 0.01486, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0585 - val_loss: 0.0149 - val_mse: 0.0149 - val_mae: 0.0952 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.01486
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0587 - val_loss: 0.0149 - val_mse: 0.0149 - val_mae: 0.0952 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.01486
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0577 - val_loss: 0.0150 - val_mse: 0.0150 - val_mae: 0.0955 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.01486
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0588 - val_loss: 0.0149 - val_mse: 0.0149 - val_mae: 0.0954 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss improved from 0.01486 to 0.01481, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0061 - mse: 0.0061 - mae: 0.0617 - val_loss: 0.0148 - val_mse: 0.0148 - val_mae: 0.0950 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss improved from 0.01481 to 0.01477, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0574 - val_loss: 0.0148 - val_mse: 0.0148 - val_mae: 0.0948 - lr: 1.0000e-05 - 239ms/epoch - 6ms/step
Epoch 52/500
Epoch 00052: val_loss did not improve from 0.01477
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0566 - val_loss: 0.0149 - val_mse: 0.0149 - val_mae: 0.0952 - lr: 1.0000e-05 - 202ms/epoch - 5ms/step
Epoch 53/500
Epoch 00053: val_loss did not improve from 0.01477
43/43 - 0s - loss: 0.0059 - mse: 0.0059 - mae: 0.0591 - val_loss: 0.0148 - val_mse: 0.0148 - val_mae: 0.0949 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 54/500
Epoch 00054: val_loss improved from 0.01477 to 0.01456, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0561 - val_loss: 0.0146 - val_mse: 0.0146 - val_mae: 0.0942 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 55/500
Epoch 00055: val_loss did not improve from 0.01456
43/43 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0558 - val_loss: 0.0146 - val_mse: 0.0146 - val_mae: 0.0943 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 56/500
Epoch 00056: val_loss improved from 0.01456 to 0.01444, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0577 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0938 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 57/500
Epoch 00057: val_loss improved from 0.01444 to 0.01443, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0571 - val_loss: 0.0144 - val_mse: 0.0144 - val_mae: 0.0937 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 58/500
Epoch 00058: val_loss improved from 0.01443 to 0.01425, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0055 - mse: 0.0055 - mae: 0.0583 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0931 - lr: 1.0000e-05 - 241ms/epoch - 6ms/step
Epoch 59/500
Epoch 00059: val_loss did not improve from 0.01425
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0582 - val_loss: 0.0143 - val_mse: 0.0143 - val_mae: 0.0932 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 60/500
Epoch 00060: val_loss improved from 0.01425 to 0.01413, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0551 - val_loss: 0.0141 - val_mse: 0.0141 - val_mae: 0.0927 - lr: 1.0000e-05 - 200ms/epoch - 5ms/step
Epoch 61/500
Epoch 00061: val_loss improved from 0.01413 to 0.01405, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0581 - val_loss: 0.0140 - val_mse: 0.0140 - val_mae: 0.0924 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
Epoch 62/500
Epoch 00062: val_loss improved from 0.01405 to 0.01401, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0564 - val_loss: 0.0140 - val_mse: 0.0140 - val_mae: 0.0923 - lr: 1.0000e-05 - 247ms/epoch - 6ms/step
Epoch 63/500
Epoch 00063: val_loss improved from 0.01401 to 0.01390, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0579 - val_loss: 0.0139 - val_mse: 0.0139 - val_mae: 0.0920 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 64/500
Epoch 00064: val_loss improved from 0.01390 to 0.01384, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0588 - val_loss: 0.0138 - val_mse: 0.0138 - val_mae: 0.0918 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 65/500
Epoch 00065: val_loss did not improve from 0.01384
43/43 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0562 - val_loss: 0.0139 - val_mse: 0.0139 - val_mae: 0.0920 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 66/500
Epoch 00066: val_loss did not improve from 0.01384
43/43 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0554 - val_loss: 0.0139 - val_mse: 0.0139 - val_mae: 0.0919 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 67/500
Epoch 00067: val_loss did not improve from 0.01384
43/43 - 0s - loss: 0.0054 - mse: 0.0054 - mae: 0.0573 - val_loss: 0.0140 - val_mse: 0.0140 - val_mae: 0.0921 - lr: 1.0000e-05 - 197ms/epoch - 5ms/step
Epoch 68/500
Epoch 00068: val_loss did not improve from 0.01384
43/43 - 0s - loss: 0.0053 - mse: 0.0053 - mae: 0.0571 - val_loss: 0.0139 - val_mse: 0.0139 - val_mae: 0.0920 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
[... per-epoch output for epochs 69–226 truncated: training continued at a fixed learning rate of 1e-05, with val_loss improving intermittently from 0.01384 to a best of 0.00906 at epoch 221; each improvement saved the model to LSTM7.h5 ...]
Epoch 227/500
Epoch 00227: val_loss did not improve from 0.00906
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0493 - val_loss: 0.0092 - val_mse: 0.0092 - val_mae: 0.0729 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 228/500
Epoch 00228: val_loss did not improve from 0.00906
43/43 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0515 - val_loss: 0.0091 - val_mse: 0.0091 - val_mae: 0.0725 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 229/500
Epoch 00229: val_loss improved from 0.00906 to 0.00901, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0479 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0721 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 230/500
Epoch 00230: val_loss did not improve from 0.00901
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0460 - val_loss: 0.0091 - val_mse: 0.0091 - val_mae: 0.0723 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 231/500
Epoch 00231: val_loss improved from 0.00901 to 0.00893, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0042 - mse: 0.0042 - mae: 0.0503 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0718 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 232/500
Epoch 00232: val_loss improved from 0.00893 to 0.00890, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0500 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0717 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 233/500
Epoch 00233: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0484 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0716 - lr: 1.0000e-05 - 214ms/epoch - 5ms/step
Epoch 234/500
Epoch 00234: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0460 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0719 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 235/500
Epoch 00235: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0479 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0717 - lr: 1.0000e-05 - 200ms/epoch - 5ms/step
Epoch 236/500
Epoch 00236: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0476 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0722 - lr: 1.0000e-05 - 202ms/epoch - 5ms/step
Epoch 237/500
Epoch 00237: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0474 - val_loss: 0.0092 - val_mse: 0.0092 - val_mae: 0.0725 - lr: 1.0000e-05 - 192ms/epoch - 4ms/step
Epoch 238/500
Epoch 00238: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0486 - val_loss: 0.0092 - val_mse: 0.0092 - val_mae: 0.0726 - lr: 1.0000e-05 - 198ms/epoch - 5ms/step
Epoch 239/500
Epoch 00239: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0468 - val_loss: 0.0093 - val_mse: 0.0093 - val_mae: 0.0731 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 240/500
Epoch 00240: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0471 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0736 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 241/500
Epoch 00241: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0472 - val_loss: 0.0096 - val_mse: 0.0096 - val_mae: 0.0741 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 242/500
Epoch 00242: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0446 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0738 - lr: 1.0000e-05 - 198ms/epoch - 5ms/step
Epoch 243/500
Epoch 00243: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0481 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0739 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 244/500
Epoch 00244: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0486 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0736 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 245/500
Epoch 00245: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0474 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0735 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 246/500
Epoch 00246: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0466 - val_loss: 0.0092 - val_mse: 0.0092 - val_mae: 0.0725 - lr: 1.0000e-05 - 238ms/epoch - 6ms/step
Epoch 247/500
Epoch 00247: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0491 - val_loss: 0.0094 - val_mse: 0.0094 - val_mae: 0.0730 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 248/500
Epoch 00248: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0467 - val_loss: 0.0094 - val_mse: 0.0094 - val_mae: 0.0732 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 249/500
Epoch 00249: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0485 - val_loss: 0.0094 - val_mse: 0.0094 - val_mae: 0.0731 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 250/500
Epoch 00250: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0468 - val_loss: 0.0095 - val_mse: 0.0095 - val_mae: 0.0735 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 251/500
Epoch 00251: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0484 - val_loss: 0.0094 - val_mse: 0.0094 - val_mae: 0.0729 - lr: 1.0000e-05 - 245ms/epoch - 6ms/step
Epoch 252/500
Epoch 00252: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0467 - val_loss: 0.0093 - val_mse: 0.0093 - val_mae: 0.0726 - lr: 1.0000e-05 - 180ms/epoch - 4ms/step
Epoch 253/500
Epoch 00253: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0465 - val_loss: 0.0092 - val_mse: 0.0092 - val_mae: 0.0722 - lr: 1.0000e-05 - 246ms/epoch - 6ms/step
Epoch 254/500
Epoch 00254: val_loss did not improve from 0.00890
43/43 - 0s - loss: 0.0045 - mse: 0.0045 - mae: 0.0510 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0712 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 255/500
Epoch 00255: val_loss improved from 0.00890 to 0.00889, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0464 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0710 - lr: 1.0000e-05 - 242ms/epoch - 6ms/step
Epoch 256/500
Epoch 00256: val_loss did not improve from 0.00889
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0488 - val_loss: 0.0091 - val_mse: 0.0091 - val_mae: 0.0719 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 257/500
Epoch 00257: val_loss did not improve from 0.00889
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0477 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0713 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 258/500
Epoch 00258: val_loss did not improve from 0.00889
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0467 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0712 - lr: 1.0000e-05 - 196ms/epoch - 5ms/step
Epoch 259/500
Epoch 00259: val_loss improved from 0.00889 to 0.00887, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0489 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0709 - lr: 1.0000e-05 - 259ms/epoch - 6ms/step
Epoch 260/500
Epoch 00260: val_loss did not improve from 0.00887
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0483 - val_loss: 0.0090 - val_mse: 0.0090 - val_mae: 0.0712 - lr: 1.0000e-05 - 177ms/epoch - 4ms/step
Epoch 261/500
Epoch 00261: val_loss did not improve from 0.00887
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0460 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0709 - lr: 1.0000e-05 - 195ms/epoch - 5ms/step
Epoch 262/500
Epoch 00262: val_loss improved from 0.00887 to 0.00884, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0471 - val_loss: 0.0088 - val_mse: 0.0088 - val_mae: 0.0707 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 263/500
Epoch 00263: val_loss improved from 0.00884 to 0.00868, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0459 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0701 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 264/500
Epoch 00264: val_loss did not improve from 0.00868
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0476 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0702 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 265/500
Epoch 00265: val_loss did not improve from 0.00868
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0474 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0702 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 266/500
Epoch 00266: val_loss improved from 0.00868 to 0.00855, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0470 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0696 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 267/500
Epoch 00267: val_loss improved from 0.00855 to 0.00846, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0467 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0693 - lr: 1.0000e-05 - 220ms/epoch - 5ms/step
Epoch 268/500
Epoch 00268: val_loss improved from 0.00846 to 0.00844, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0470 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0693 - lr: 1.0000e-05 - 243ms/epoch - 6ms/step
Epoch 269/500
Epoch 00269: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0459 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0697 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 270/500
Epoch 00270: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0483 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0695 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 271/500
Epoch 00271: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0475 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0698 - lr: 1.0000e-05 - 176ms/epoch - 4ms/step
Epoch 272/500
Epoch 00272: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0481 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0695 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 273/500
Epoch 00273: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0426 - val_loss: 0.0089 - val_mse: 0.0089 - val_mae: 0.0709 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 274/500
Epoch 00274: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0458 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0702 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 275/500
Epoch 00275: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0454 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0701 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 276/500
Epoch 00276: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0479 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0696 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step
Epoch 277/500
Epoch 00277: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0471 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0697 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 278/500
Epoch 00278: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0475 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0699 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 279/500
Epoch 00279: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0460 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0700 - lr: 1.0000e-05 - 196ms/epoch - 5ms/step
Epoch 280/500
Epoch 00280: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0444 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0695 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 281/500
Epoch 00281: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0488 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0696 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 282/500
Epoch 00282: val_loss did not improve from 0.00844
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0477 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0691 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 283/500
Epoch 00283: val_loss improved from 0.00844 to 0.00837, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0469 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0687 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step
Epoch 284/500
Epoch 00284: val_loss did not improve from 0.00837
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0476 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0690 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 285/500
Epoch 00285: val_loss did not improve from 0.00837
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0466 - val_loss: 0.0086 - val_mse: 0.0086 - val_mae: 0.0695 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 286/500
Epoch 00286: val_loss did not improve from 0.00837
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0456 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0687 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 287/500
Epoch 00287: val_loss improved from 0.00837 to 0.00818, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0485 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0679 - lr: 1.0000e-05 - 238ms/epoch - 6ms/step
Epoch 288/500
Epoch 00288: val_loss improved from 0.00818 to 0.00815, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0492 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0678 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 289/500
Epoch 00289: val_loss did not improve from 0.00815
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0453 - val_loss: 0.0083 - val_mse: 0.0083 - val_mae: 0.0683 - lr: 1.0000e-05 - 195ms/epoch - 5ms/step
Epoch 290/500
Epoch 00290: val_loss did not improve from 0.00815
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0491 - val_loss: 0.0083 - val_mse: 0.0083 - val_mae: 0.0683 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 291/500
Epoch 00291: val_loss improved from 0.00815 to 0.00810, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0458 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0677 - lr: 1.0000e-05 - 246ms/epoch - 6ms/step
Epoch 292/500
Epoch 00292: val_loss improved from 0.00810 to 0.00798, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0466 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0673 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 293/500
Epoch 00293: val_loss did not improve from 0.00798
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0473 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0678 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 294/500
Epoch 00294: val_loss did not improve from 0.00798
43/43 - 0s - loss: 0.0041 - mse: 0.0041 - mae: 0.0483 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0675 - lr: 1.0000e-05 - 205ms/epoch - 5ms/step
Epoch 295/500
Epoch 00295: val_loss improved from 0.00798 to 0.00792, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0462 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0670 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 296/500
Epoch 00296: val_loss did not improve from 0.00792
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0471 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0671 - lr: 1.0000e-05 - 226ms/epoch - 5ms/step
Epoch 297/500
Epoch 00297: val_loss improved from 0.00792 to 0.00790, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0477 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0670 - lr: 1.0000e-05 - 202ms/epoch - 5ms/step
Epoch 298/500
Epoch 00298: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0464 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0670 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 299/500
Epoch 00299: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0479 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0675 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 300/500
Epoch 00300: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0459 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0678 - lr: 1.0000e-05 - 229ms/epoch - 5ms/step
Epoch 301/500
Epoch 00301: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0458 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0671 - lr: 1.0000e-05 - 200ms/epoch - 5ms/step
Epoch 302/500
Epoch 00302: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0463 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0669 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 303/500
Epoch 00303: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0469 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0672 - lr: 1.0000e-05 - 202ms/epoch - 5ms/step
Epoch 304/500
Epoch 00304: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0470 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0679 - lr: 1.0000e-05 - 187ms/epoch - 4ms/step
Epoch 305/500
Epoch 00305: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0472 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0673 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 306/500
Epoch 00306: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0461 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0670 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step
Epoch 307/500
Epoch 00307: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0435 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0670 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 308/500
Epoch 00308: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0464 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0673 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 309/500
Epoch 00309: val_loss did not improve from 0.00790
43/43 - 0s - loss: 0.0040 - mse: 0.0040 - mae: 0.0480 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0668 - lr: 1.0000e-05 - 189ms/epoch - 4ms/step
Epoch 310/500
Epoch 00310: val_loss improved from 0.00790 to 0.00778, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0446 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0663 - lr: 1.0000e-05 - 246ms/epoch - 6ms/step
Epoch 311/500
Epoch 00311: val_loss improved from 0.00778 to 0.00778, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0448 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0663 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 312/500
Epoch 00312: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0463 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0667 - lr: 1.0000e-05 - 207ms/epoch - 5ms/step
Epoch 313/500
Epoch 00313: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0455 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0675 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 314/500
Epoch 00314: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0439 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0670 - lr: 1.0000e-05 - 209ms/epoch - 5ms/step
Epoch 315/500
Epoch 00315: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0480 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0675 - lr: 1.0000e-05 - 210ms/epoch - 5ms/step
Epoch 316/500
Epoch 00316: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0450 - val_loss: 0.0083 - val_mse: 0.0083 - val_mae: 0.0681 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 317/500
Epoch 00317: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0461 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0676 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 318/500
Epoch 00318: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0464 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0678 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 319/500
Epoch 00319: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0462 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0676 - lr: 1.0000e-05 - 178ms/epoch - 4ms/step
Epoch 320/500
Epoch 00320: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0467 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0678 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 321/500
Epoch 00321: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0465 - val_loss: 0.0083 - val_mse: 0.0083 - val_mae: 0.0684 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 322/500
Epoch 00322: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0465 - val_loss: 0.0083 - val_mse: 0.0083 - val_mae: 0.0683 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 323/500
Epoch 00323: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0454 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0685 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 324/500
Epoch 00324: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0436 - val_loss: 0.0087 - val_mse: 0.0087 - val_mae: 0.0699 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 325/500
Epoch 00325: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0465 - val_loss: 0.0085 - val_mse: 0.0085 - val_mae: 0.0690 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 326/500
Epoch 00326: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0476 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0675 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 327/500
Epoch 00327: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0450 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0686 - lr: 1.0000e-05 - 183ms/epoch - 4ms/step
Epoch 328/500
Epoch 00328: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0460 - val_loss: 0.0084 - val_mse: 0.0084 - val_mae: 0.0684 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 329/500
Epoch 00329: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0479 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0678 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 330/500
Epoch 00330: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0449 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0672 - lr: 1.0000e-05 - 175ms/epoch - 4ms/step
Epoch 331/500
Epoch 00331: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0444 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0667 - lr: 1.0000e-05 - 195ms/epoch - 5ms/step
Epoch 332/500
Epoch 00332: val_loss did not improve from 0.00778
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0445 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0665 - lr: 1.0000e-05 - 179ms/epoch - 4ms/step
Epoch 333/500
Epoch 00333: val_loss improved from 0.00778 to 0.00772, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0459 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0656 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 334/500
Epoch 00334: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0487 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0659 - lr: 1.0000e-05 - 215ms/epoch - 5ms/step
Epoch 335/500
Epoch 00335: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0466 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0668 - lr: 1.0000e-05 - 185ms/epoch - 4ms/step
Epoch 336/500
Epoch 00336: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0446 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0671 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 337/500
Epoch 00337: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0470 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0666 - lr: 1.0000e-05 - 211ms/epoch - 5ms/step
Epoch 338/500
Epoch 00338: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0479 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0670 - lr: 1.0000e-05 - 217ms/epoch - 5ms/step
Epoch 339/500
Epoch 00339: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0451 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0671 - lr: 1.0000e-05 - 213ms/epoch - 5ms/step
Epoch 340/500
Epoch 00340: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0445 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0672 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 341/500
Epoch 00341: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0442 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0675 - lr: 1.0000e-05 - 244ms/epoch - 6ms/step
Epoch 342/500
Epoch 00342: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0472 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0674 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 343/500
Epoch 00343: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0447 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0661 - lr: 1.0000e-05 - 230ms/epoch - 5ms/step
Epoch 344/500
Epoch 00344: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0461 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0665 - lr: 1.0000e-05 - 182ms/epoch - 4ms/step
Epoch 345/500
Epoch 00345: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0465 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0661 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 346/500
Epoch 00346: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0034 - mse: 0.0034 - mae: 0.0448 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0659 - lr: 1.0000e-05 - 218ms/epoch - 5ms/step
Epoch 347/500
Epoch 00347: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0039 - mse: 0.0039 - mae: 0.0474 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0658 - lr: 1.0000e-05 - 184ms/epoch - 4ms/step
Epoch 348/500
Epoch 00348: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0466 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0654 - lr: 1.0000e-05 - 193ms/epoch - 4ms/step
Epoch 349/500
Epoch 00349: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0037 - mse: 0.0037 - mae: 0.0479 - val_loss: 0.0077 - val_mse: 0.0077 - val_mae: 0.0655 - lr: 1.0000e-05 - 195ms/epoch - 5ms/step
Epoch 350/500
Epoch 00350: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0455 - val_loss: 0.0079 - val_mse: 0.0079 - val_mae: 0.0661 - lr: 1.0000e-05 - 208ms/epoch - 5ms/step
Epoch 351/500
Epoch 00351: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0035 - mse: 0.0035 - mae: 0.0456 - val_loss: 0.0080 - val_mse: 0.0080 - val_mae: 0.0664 - lr: 1.0000e-05 - 212ms/epoch - 5ms/step
Epoch 352/500
Epoch 00352: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0033 - mse: 0.0033 - mae: 0.0444 - val_loss: 0.0082 - val_mse: 0.0082 - val_mae: 0.0676 - lr: 1.0000e-05 - 216ms/epoch - 5ms/step
Epoch 353/500
Epoch 00353: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0459 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0669 - lr: 1.0000e-05 - 204ms/epoch - 5ms/step
Epoch 354/500
Epoch 00354: val_loss did not improve from 0.00772
43/43 - 0s - loss: 0.0036 - mse: 0.0036 - mae: 0.0465 - val_loss: 0.0078 - val_mse: 0.0078 - val_mae: 0.0658 - lr: 1.0000e-05 - 257ms/epoch - 6ms/step
Epoch 355/500
...
Epoch 358/500
Epoch 00358: val_loss improved from 0.00772 to 0.00762, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0443 - val_loss: 0.0076 - val_mse: 0.0076 - val_mae: 0.0649 - lr: 1.0000e-05 - 239ms/epoch - 6ms/step
...
Epoch 365/500
Epoch 00365: val_loss improved from 0.00762 to 0.00750, saving model to LSTM7.h5
43/43 - 0s - loss: 0.0031 - mse: 0.0031 - mae: 0.0428 - val_loss: 0.0075 - val_mse: 0.0075 - val_mae: 0.0643 - lr: 1.0000e-05 - 203ms/epoch - 5ms/step
...
Epoch 415/500
Epoch 00415: val_loss did not improve from 0.00750
43/43 - 0s - loss: 0.0032 - mse: 0.0032 - mae: 0.0434 - val_loss: 0.0081 - val_mse: 0.0081 - val_mae: 0.0668 - lr: 1.0000e-05 - 181ms/epoch - 4ms/step
Epoch 00415: early stopping
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 44.65212926265077
RMSE: 6.682224873696692
MAPE: 5.204686480071648
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 45.539825469272486
RMSE: 6.748320196113436
MAPE: 5.43245952292463
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 42.30488040231578
RMSE: 6.504220199402522
MAPE: 5.010195929360332
DEMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 54.48% Accuracy
MSE: 23.305922116020078
RMSE: 4.827620751055335
MAPE: 3.7452201197397774
KAMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 18.082341646298453
RMSE: 4.252333670621163
MAPE: 3.4333194517527637
MIDPOINT
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 91.59813707600279
RMSE: 9.57069156727991
MAPE: 7.718313236319782
T3
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 145.19295971499469
RMSE: 12.049604131049065
MAPE: 9.875885491811884
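The per-indicator blocks above report two directional accuracy figures alongside MSE, RMSE and MAPE. As a reference for how such numbers can be computed — with the caveat that the exact definition of the accuracy figures is an assumption here, and `regression_metrics` / `direction_accuracy` are illustrative helpers rather than the notebook's own functions — a minimal pure-Python sketch:

```python
import math

def regression_metrics(actual, pred):
    """MSE, RMSE and MAPE between actual closes and model predictions."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, pred)) / n
    rmse = math.sqrt(mse)
    mape = sum(abs((a - p) / a) for a, p in zip(actual, pred)) / n * 100
    return mse, rmse, mape

def direction_accuracy(actual, pred):
    """Percent of days where the predicted move from the prior close matches
    the actual move -- one plausible reading of 'Prediction vs Close'."""
    hits = 0
    for t in range(1, len(actual)):
        actual_up = actual[t] > actual[t - 1]
        pred_up = pred[t] > actual[t - 1]
        hits += actual_up == pred_up
    return hits / (len(actual) - 1) * 100
```

A 'Prediction vs Prediction' variant would compare consecutive predictions instead of prediction against prior close, which is why the two percentages can differ for the same series.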
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16736.686, Time=3.46 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-15327.143, Time=3.48 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15166.078, Time=7.33 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14962.662, Time=14.27 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16731.606, Time=5.59 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14848.952, Time=10.35 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16921.745, Time=6.06 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14958.662, Time=18.28 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15003.046, Time=13.47 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16752.122, Time=4.06 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 86.368 seconds
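The stepwise search above simply fits each candidate order and keeps the one with the smallest AIC, where AIC = 2k − 2 ln L̂ for k estimated parameters. A small sketch restating the printed values and the selection step (the candidate table is copied from the log above):

```python
# AIC values reported by the stepwise search above: (p, d, q) -> AIC
candidates = {
    (1, 3, 1): -16736.686,
    (0, 3, 0): -15327.143,
    (1, 3, 0): -15166.078,
    (0, 3, 1): -14962.662,
    (2, 3, 1): -16731.606,
    (1, 3, 2): -14848.952,
    (0, 3, 2): -16921.745,
    (0, 3, 3): -14958.662,
    (1, 3, 3): -15003.046,
}

def aic(k, log_likelihood):
    """Akaike information criterion: 2k - 2 ln L_hat."""
    return 2 * k - 2 * log_likelihood

# Model selection is just a minimum over candidate AICs
best_order = min(candidates, key=candidates.get)

# Sanity check against the summary below: with k = 32 estimated parameters
# (29 exogenous coefficients + ma.L1 + ma.L2 + sigma2) and the reported
# log-likelihood 8492.873, aic(32, 8492.873) reproduces -16921.746,
# matching the printed AIC up to rounding.
```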
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8492.873
Date: Sun, 12 Dec 2021 AIC -16921.745
Time: 19:05:25 BIC -16771.638
Sample: 0 HQIC -16864.098
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.277e-08 0.001 3.25e-05 1.000 -0.001 0.001
x2 2.286e-08 0.001 2.5e-05 1.000 -0.002 0.002
x3 2.286e-08 0.001 3.44e-05 1.000 -0.001 0.001
x4 1.0000 0.000 3190.279 0.000 0.999 1.001
x5 2.174e-08 0.001 4.21e-05 1.000 -0.001 0.001
x6 6.124e-09 3.05e-05 0.000 1.000 -5.97e-05 5.97e-05
x7 2.246e-08 0.001 1.67e-05 1.000 -0.003 0.003
x8 -0.0013 0.001 -1.669 0.095 -0.003 0.000
x9 -5.239e-09 0.000 -1.79e-05 1.000 -0.001 0.001
x10 0.0001 9.9e-05 1.396 0.163 -5.59e-05 0.000
x11 -0.0001 0.001 -0.177 0.859 -0.002 0.001
x12 0.0012 0.001 1.426 0.154 -0.000 0.003
x13 2.284e-08 0.000 6.75e-05 1.000 -0.001 0.001
x14 6.258e-08 0.001 5.07e-05 1.000 -0.002 0.002
x15 2.215e-08 0.000 0.000 1.000 -0.000 0.000
x16 3.243e-08 0.000 0.000 1.000 -0.001 0.001
x17 2.22e-08 0.000 0.000 1.000 -0.000 0.000
x18 7.527e-09 0.000 1.67e-05 1.000 -0.001 0.001
x19 2.477e-08 0.000 0.000 1.000 -0.000 0.000
x20 -2.348e-08 0.000 -5.78e-05 1.000 -0.001 0.001
x21 2.718e-08 5.8e-05 0.000 1.000 -0.000 0.000
x22 -2.176e-10 0.000 -5.27e-07 1.000 -0.001 0.001
x23 -2.69e-09 8.49e-05 -3.17e-05 1.000 -0.000 0.000
x24 -4.516e-08 7.24e-06 -0.006 0.995 -1.42e-05 1.41e-05
x25 -4.213e-08 2.81e-05 -0.002 0.999 -5.51e-05 5.5e-05
x26 7.946e-08 0.001 0.000 1.000 -0.001 0.001
x27 4.528e-08 0.001 6.21e-05 1.000 -0.001 0.001
x28 5.92e-08 0.001 4.12e-05 1.000 -0.003 0.003
x29 3.468e-08 0.000 7.06e-05 1.000 -0.001 0.001
ma.L1 -1.3739 4.46e-06 -3.08e+05 0.000 -1.374 -1.374
ma.L2 0.3968 1.4e-05 2.84e+04 0.000 0.397 0.397
sigma2 7.701e-11 7.39e-11 1.043 0.297 -6.78e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 61.47 Jarque-Bera (JB): 5565463.09
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 10.97
Prob(H) (two-sided): 0.00 Kurtosis: 409.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.67e+20. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
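With the ARIMA order fixed, the LSTM is trained next. One common way to combine the two models in a hybrid of this kind is to let ARIMA carry the linear component and train the LSTM on ARIMA's residuals, then add the two forecasts back together. This is a sketch of that arrangement under that assumption, not necessarily the exact combination step used in this notebook:

```python
def residual_series(actual, arima_in_sample):
    """Residuals the LSTM would be trained on: actual minus the ARIMA fit."""
    return [y - f for y, f in zip(actual, arima_in_sample)]

def hybrid_forecast(arima_forecast, lstm_residual_forecast):
    """Combine the linear (ARIMA) and nonlinear (LSTM) components.

    arima_forecast:         ARIMA's prediction of the series itself
    lstm_residual_forecast: LSTM's prediction of the ARIMA residual
    """
    return [a + r for a, r in zip(arima_forecast, lstm_residual_forecast)]
```

If the LSTM predicts the residuals perfectly, the combined forecast recovers the actual series exactly, which is the motivation for the decomposition.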
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.03716, saving model to LSTM7.h5
90/90 - 3s - loss: 0.0617 - mse: 0.0617 - mae: 0.2014 - val_loss: 0.0372 - val_mse: 0.0372 - val_mae: 0.1565 - lr: 0.0010 - 3s/epoch - 29ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.03716
90/90 - 0s - loss: 0.0580 - mse: 0.0580 - mae: 0.2030 - val_loss: 0.0943 - val_mse: 0.0943 - val_mae: 0.2265 - lr: 0.0010 - 374ms/epoch - 4ms/step
Epoch 3/500
Epoch 00003: val_loss improved from 0.03716 to 0.03535, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0561 - mse: 0.0561 - mae: 0.1766 - val_loss: 0.0353 - val_mse: 0.0353 - val_mae: 0.1545 - lr: 0.0010 - 387ms/epoch - 4ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.03535
90/90 - 0s - loss: 0.0112 - mse: 0.0112 - mae: 0.0810 - val_loss: 0.0895 - val_mse: 0.0895 - val_mae: 0.2354 - lr: 0.0010 - 353ms/epoch - 4ms/step
Epoch 5/500
Epoch 00005: val_loss improved from 0.03535 to 0.02650, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0121 - mse: 0.0121 - mae: 0.0824 - val_loss: 0.0265 - val_mse: 0.0265 - val_mae: 0.1325 - lr: 0.0010 - 384ms/epoch - 4ms/step
Epoch 6/500
Epoch 00006: val_loss did not improve from 0.02650
90/90 - 0s - loss: 0.0114 - mse: 0.0114 - mae: 0.0815 - val_loss: 0.0968 - val_mse: 0.0968 - val_mae: 0.2570 - lr: 0.0010 - 387ms/epoch - 4ms/step
Epoch 7/500
Epoch 00007: val_loss improved from 0.02650 to 0.02480, saving model to LSTM7.h5
90/90 - 0s - loss: 0.0096 - mse: 0.0096 - mae: 0.0742 - val_loss: 0.0248 - val_mse: 0.0248 - val_mae: 0.1259 - lr: 0.0010 - 383ms/epoch - 4ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0084 - mse: 0.0084 - mae: 0.0713 - val_loss: 0.1028 - val_mse: 0.1028 - val_mae: 0.2678 - lr: 0.0010 - 461ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0076 - mse: 0.0076 - mae: 0.0661 - val_loss: 0.0284 - val_mse: 0.0284 - val_mae: 0.1277 - lr: 0.0010 - 375ms/epoch - 4ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0074 - mse: 0.0074 - mae: 0.0653 - val_loss: 0.0942 - val_mse: 0.0942 - val_mae: 0.2562 - lr: 0.0010 - 365ms/epoch - 4ms/step
Epoch 11/500
Epoch 00011: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0078 - mse: 0.0078 - mae: 0.0677 - val_loss: 0.0346 - val_mse: 0.0346 - val_mae: 0.1364 - lr: 0.0010 - 465ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00012: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0068 - mse: 0.0068 - mae: 0.0622 - val_loss: 0.1167 - val_mse: 0.1167 - val_mae: 0.2958 - lr: 0.0010 - 372ms/epoch - 4ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0127 - mse: 0.0127 - mae: 0.0938 - val_loss: 0.0828 - val_mse: 0.0828 - val_mae: 0.2357 - lr: 1.0000e-04 - 366ms/epoch - 4ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0056 - mse: 0.0056 - mae: 0.0590 - val_loss: 0.0758 - val_mse: 0.0758 - val_mae: 0.2222 - lr: 1.0000e-04 - 470ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0052 - mse: 0.0052 - mae: 0.0578 - val_loss: 0.0708 - val_mse: 0.0708 - val_mae: 0.2123 - lr: 1.0000e-04 - 382ms/epoch - 4ms/step
Epoch 16/500
Epoch 00016: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0050 - mse: 0.0050 - mae: 0.0549 - val_loss: 0.0687 - val_mse: 0.0687 - val_mae: 0.2080 - lr: 1.0000e-04 - 374ms/epoch - 4ms/step
Epoch 17/500
Epoch 00017: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00017: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0051 - mse: 0.0051 - mae: 0.0556 - val_loss: 0.0653 - val_mse: 0.0653 - val_mae: 0.2011 - lr: 1.0000e-04 - 432ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0043 - mse: 0.0043 - mae: 0.0530 - val_loss: 0.0652 - val_mse: 0.0652 - val_mae: 0.2010 - lr: 1.0000e-05 - 366ms/epoch - 4ms/step
...
Epoch 57/500
Epoch 00057: val_loss did not improve from 0.02480
90/90 - 0s - loss: 0.0038 - mse: 0.0038 - mae: 0.0475 - val_loss: 0.0642 - val_mse: 0.0642 - val_mae: 0.2007 - lr: 1.0000e-05 - 365ms/epoch - 4ms/step
Epoch 00057: early stopping
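The improved/did-not-improve, learning-rate-reduction and early-stopping messages throughout these logs all follow the same patience bookkeeping: remember the best val_loss, checkpoint on improvement, shrink the learning rate after a short plateau, and stop after a long one. A plain-Python sketch of that logic (the patience values and reduction factor are illustrative assumptions, not read from the notebook's callbacks):

```python
def run_patience_loop(val_losses, lr=1e-3, lr_patience=5, stop_patience=10, lr_factor=0.1):
    """Replay a sequence of per-epoch val_losses through checkpoint /
    reduce-LR-on-plateau / early-stopping bookkeeping."""
    best = float("inf")
    since_best = 0
    for epoch, vl in enumerate(val_losses, start=1):
        if vl < best:                       # "val_loss improved ... saving model"
            best, since_best = vl, 0
        else:                               # "val_loss did not improve"
            since_best += 1
            if since_best % lr_patience == 0:
                lr *= lr_factor             # "ReduceLROnPlateau reducing learning rate"
            if since_best >= stop_patience:
                return epoch, best, lr      # "early stopping"
    return len(val_losses), best, lr
```

In the run above the same pattern is visible: the last improvement is at epoch 7 (val_loss 0.02480), the learning rate is cut twice during the plateau, and training halts at epoch 57 once the stopping patience is exhausted.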
SMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 44.65212926265077
RMSE: 6.682224873696692
MAPE: 5.204686480071648
EMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 45.539825469272486
RMSE: 6.748320196113436
MAPE: 5.43245952292463
WMA
Prediction vs Close: 52.99% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 42.30488040231578
RMSE: 6.504220199402522
MAPE: 5.010195929360332
DEMA
Prediction vs Close: 55.6% Accuracy
Prediction vs Prediction: 54.48% Accuracy
MSE: 23.305922116020078
RMSE: 4.827620751055335
MAPE: 3.7452201197397774
KAMA
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 18.082341646298453
RMSE: 4.252333670621163
MAPE: 3.4333194517527637
MIDPOINT
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 50.0% Accuracy
MSE: 91.59813707600279
RMSE: 9.57069156727991
MAPE: 7.718313236319782
T3
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 145.19295971499469
RMSE: 12.049604131049065
MAPE: 9.875885491811884
TEMA
Prediction vs Close: 51.12% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 41.1158513706741
RMSE: 6.412164328109044
MAPE: 5.720374187090847
Runtime: mins: 60.462800946899975
from google.colab import files
import cv2
import matplotlib.pyplot as plt  # needed for plt.figure / plt.imshow below
uploaded = files.upload()
Saving Experiment7.png to Experiment7 (1).png
img = cv2.imread('Experiment7.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2 loads BGR; convert so matplotlib shows true colors
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fa65007efd0>
for SIM in simulation7.keys():
    plot_train(simulation7, SIM)
    plot_test(simulation7, SIM)
----- Train RMSE for SMA ----- 9.009338337956548 ----- Train_MSE_LSTM for SMA ----- 81.16817728777366 ----- Train MAE LSTM for SMA ----- 7.791349936760778
----- Test RMSE for SMA----- 6.682224873696692 ----- Test_MSE_LSTM for SMA----- 44.65212926265077 ----- Test_MAE_LSTM for SMA----- 5.204686480071648
----- Train RMSE for EMA ----- 10.75589930055345 ----- Train_MSE_LSTM for EMA ----- 115.68936976364621 ----- Train MAE LSTM for EMA ----- 9.582493831136144
----- Test RMSE for EMA----- 6.748320196113436 ----- Test_MSE_LSTM for EMA----- 45.539825469272486 ----- Test_MAE_LSTM for EMA----- 5.43245952292463
----- Train RMSE for WMA ----- 10.908509232790552 ----- Train_MSE_LSTM for WMA ----- 118.9955736818767 ----- Train MAE LSTM for WMA ----- 9.800814582394565
----- Test RMSE for WMA----- 6.504220199402522 ----- Test_MSE_LSTM for WMA----- 42.30488040231578 ----- Test_MAE_LSTM for WMA----- 5.010195929360332
----- Train RMSE for DEMA ----- 12.643440847440496 ----- Train_MSE_LSTM for DEMA ----- 159.85659646272686 ----- Train MAE LSTM for DEMA ----- 11.36962745253845
----- Test RMSE for DEMA----- 4.827620751055335 ----- Test_MSE_LSTM for DEMA----- 23.305922116020078 ----- Test_MAE_LSTM for DEMA----- 3.7452201197397774
----- Train RMSE for KAMA ----- 11.147835109393704 ----- Train_MSE_LSTM for KAMA ----- 124.27422762623091 ----- Train MAE LSTM for KAMA ----- 10.178621601225753
----- Test RMSE for KAMA----- 4.252333670621163 ----- Test_MSE_LSTM for KAMA----- 18.082341646298453 ----- Test_MAE_LSTM for KAMA----- 3.4333194517527637
----- Train RMSE for MIDPOINT ----- 9.563103791664194 ----- Train_MSE_LSTM for MIDPOINT ----- 91.45295413014207 ----- Train MAE LSTM for MIDPOINT ----- 8.50176313593246
----- Test RMSE for MIDPOINT----- 9.57069156727991 ----- Test_MSE_LSTM for MIDPOINT----- 91.59813707600279 ----- Test_MAE_LSTM for MIDPOINT----- 7.718313236319782
----- Train RMSE for T3 ----- 12.352918439245476 ----- Train_MSE_LSTM for T3 ----- 152.5945939666509 ----- Train MAE LSTM for T3 ----- 11.209855832067309
----- Test RMSE for T3----- 12.049604131049065 ----- Test_MSE_LSTM for T3----- 145.19295971499469 ----- Test_MAE_LSTM for T3----- 9.875885491811884
----- Train RMSE for TEMA ----- 7.436400272753141 ----- Train_MSE_LSTM for TEMA ----- 55.30004901660298 ----- Train MAE LSTM for TEMA ----- 5.158390050894572
----- Test RMSE for TEMA----- 6.412164328109044 ----- Test_MSE_LSTM for TEMA----- 41.1158513706741 ----- Test_MAE_LSTM for TEMA----- 5.720374187090847
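Each line above reports an (RMSE, MSE, MAE) triple for one moving-average type. These follow the standard definitions; a minimal stand-alone sketch (the helper name and sample values are illustrative, not from the notebook):

```python
import numpy as np

def report_errors(y_true, y_pred):
    """Return (mse, rmse, mae) for two equal-length series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))   # mean squared error
    rmse = mse ** 0.5                # root of the MSE
    mae = float(np.mean(np.abs(err)))  # mean absolute error
    return mse, rmse, mae

mse, rmse, mae = report_errors([10.0, 12.0, 11.0], [9.0, 13.0, 11.0])
```

Note that RMSE is always the square root of MSE, which the triples above confirm.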
def get_arima_exog(dataframe, original_data, train_len, test_len):
    # Prepare train and test data for the exogenous variables
    X_value = pd.DataFrame(dataframe.iloc[:, :])
    y_value = pd.DataFrame(dataframe.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    X_train, X_test = split_train_test(X_scale_dataset)
    y_train, y_test = split_train_test(y_scale_dataset)
    yc_train, yc_test = split_train_test(original_data)
    yc = yc_test.values.tolist()
    y_train_list = y_train.flatten().tolist()
    y_test_list = y_test.flatten().tolist()
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    # Determine the model order via stepwise AIC search
    model = auto_arima(y_train_list, exogenous=X_train, trace=True, error_action='ignore',
                       start_p=1, start_q=1, max_p=3, max_q=3, d=3,
                       suppress_warnings=True, stepwise=True, seasonal=True)
    print(model.summary())
    model.fit(y_train_list, maxiter=200)
    order = model.get_params()['order']
    print('ARIMA order:', order, '\n')
    # Generate walk-forward predictions: refit on the expanding history, forecast
    # one step, then append the realized value
    # (note: the per-step refits below do not pass the exogenous regressors)
    prediction = []
    for i in range(len(y_test_list)):
        model = pmdarima.ARIMA(order=order)
        model.fit(y_train_list)
        prediction.append(model.predict(n_periods=1)[0])
        y_train_list.append(y_test_list[i])
    predictionte = y_scaler.inverse_transform(np.array(prediction).reshape(-1, 1))
    y_test_ = y_scaler.inverse_transform(np.array(y_test_list).reshape(-1, 1))
    # Generate error data (both series on the original scale)
    mse = mean_squared_error(y_test_, predictionte)
    rmse = mse ** 0.5
    mae = mean_absolute_error(y_test_, predictionte)
    return yc, predictionte.flatten().tolist(), mse, rmse, mae
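get_arima_exog evaluates the ARIMA model walk-forward: at each test step it refits on the history seen so far, forecasts one step, then appends the realized value. The same pattern, decoupled from pmdarima (the naive last-value forecaster here is a stand-in, not the notebook's model):

```python
def walk_forward(train, test, forecaster):
    """Expanding-window one-step-ahead evaluation.

    After each forecast, the realized test value is appended to the
    history, mirroring the y_train_list.append(...) step above.
    """
    history = list(train)
    preds = []
    for actual in test:
        preds.append(forecaster(history))  # forecast one step ahead
        history.append(actual)             # then reveal the true value
    return preds

naive = lambda h: h[-1]  # stand-in for a refit ARIMA's one-step forecast
preds = walk_forward([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], naive)  # [3.0, 4.0, 5.0]
```

Because the model is refit at every step, this loop dominates the runtime of the ARIMA leg; it trades speed for forecasts that always use the latest observations.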
def get_lstm(data, original_data, train_len, test_len, img_file, ma, lstm_len=3):
    # Prepare train and test data
    X_value = pd.DataFrame(data.iloc[:, :])
    y_value = pd.DataFrame(data.iloc[:, 3])
    X_scaler = MinMaxScaler(feature_range=(-1, 1))
    y_scaler = MinMaxScaler(feature_range=(-1, 1))
    X_scale_dataset = X_scaler.fit_transform(X_value)
    y_scale_dataset = y_scaler.fit_transform(y_value)
    # Window the scaled data: each X sample is 3 days' worth of features;
    # yc holds the corresponding closing price values
    X, y, yc = get_X_y(X_scale_dataset, y_scale_dataset)
    X_train, X_test = split_train_test(X)
    y_train, y_test = split_train_test(y)
    index_train, index_test = predict_index(dataset_final, X_train, n_steps_in, n_steps_out)
    det = 20  # ad-hoc offset subtracted from the inverse-transformed test predictions below
    input_dim = X_train.shape[1]     # time steps per window
    feature_size = X_train.shape[2]  # features per time step
    output_dim = y_train.shape[1]    # forecast horizon
    # Option 1: single LSTM layer with a ReLU dense head
    # model = Sequential()
    # model.add(LSTM(256, activation='relu', kernel_initializer='he_normal', input_shape=(input_dim, feature_size)))
    # model.add(Dense(units=64, activation='relu'))
    # model.add(Dropout(0.5))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
    # (followed by the same callbacks / plot_model / fit / loss-plot code as the
    # active Option 4 below, with batch_size=1 and checkpointing to 'LSTM1.h5')
    # Option 2: bidirectional LSTM
    # model = Sequential()
    # model.add(Bidirectional(LSTM(units=128), input_shape=(input_dim, feature_size)))
    # model.add(Dense(64))
    # model.add(Dense(units=output_dim))
    # model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error', metrics=['accuracy'])
    # (followed by the same callbacks / plot_model / fit / loss-plot code as the
    # active Option 4 below, with batch_size=1 and checkpointing to 'LSTM7.h5')
    # Option 3: LSTM with a custom double-tanh output activation
    # reference: https://github.com/Vaibhav-Sachdeva/Correlation-Coefficient-Prediction-using-ARIMA-LSTM-Hybrid-Model/blob/main/Code/LSTM-ARIMA.ipynb
    # class Double_Tanh(Activation):
    #     def __init__(self, activation, **kwargs):
    #         super(Double_Tanh, self).__init__(activation, **kwargs)
    #         self.__name__ = 'double_tanh'
    # def double_tanh(x):
    #     return K.tanh(x) * 2
    # get_custom_objects().update({'double_tanh': Double_Tanh(double_tanh)})
    # model = Sequential()
    # on weight regularization, see https://machinelearningmastery.com/use-weight-regularization-lstm-networks-time-series-forecasting/
    # model.add(LSTM(25, input_shape=(input_dim, feature_size), dropout=0.2, kernel_regularizer=l1_l2(0.00, 0.00), bias_regularizer=l1_l2(0.00, 0.00)))
    # model.add(Dense(1))
    # model.add(Activation(double_tanh))
    # model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse', 'mae'])
    # (followed by the same callbacks / plot_model / fit / loss-plot code as the
    # active Option 4 below, with batch_size=1 and checkpointing to 'LSTM7.h5')
    # Option 4 (active): stacked LSTM
    model = Sequential()
    model.add(LSTM(units=lstm_len, return_sequences=True, input_shape=(input_dim, feature_size)))
    model.add(LSTM(units=int(lstm_len / 2)))
    # Note: y is scaled to (-1, 1); a 'tanh' output would cover that range,
    # whereas 'sigmoid' is limited to (0, 1)
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    callbacks = [
        EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50),
        ReduceLROnPlateau(factor=0.1, patience=5, min_lr=0.00001, verbose=1),
        ModelCheckpoint('LSTM8.h5', verbose=1, save_best_only=True, save_weights_only=True)]
    fname1 = img_file + '.png'
    tensorflow.keras.utils.plot_model(
        model, to_file=fname1, show_shapes=True, show_dtype=False,
        show_layer_names=True, expand_nested=False, dpi=96,
        layer_range=None, show_layer_activations=False)
    history = model.fit(X_train, y_train, epochs=500, batch_size=int(optimized_period[ma]),
                        verbose=2, callbacks=callbacks, validation_data=(X_test, y_test), shuffle=False)
    # Plot train/validation loss
    fname2 = img_file + '-' + ma
    plt.title(fname2 + ' Loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='validation')
    plt.legend()
    plt.savefig(fname2 + '.png', dpi='figure')
    plt.show()
    # Generate train predictions and inverse-transform back to the original scale
    predictiontr = model.predict(X_train, verbose=0)
    predictiontr = y_scaler.inverse_transform(predictiontr).flatten().tolist()
    Original_tr = y_scaler.inverse_transform(y_train).flatten().tolist()
    # Train error data (both series on the original scale)
    mse_tr = mean_squared_error(Original_tr, predictiontr)
    rmse_tr = mse_tr ** 0.5
    mae_tr = mean_absolute_error(Original_tr, predictiontr)
    # Generate test predictions; `det` is the ad-hoc offset defined above
    predictionte = model.predict(X_test, verbose=0)
    predictionte = (y_scaler.inverse_transform(predictionte) - det).flatten().tolist()
    Original_te = y_scaler.inverse_transform(y_test).flatten().tolist()
    # Test error data
    mse_te = mean_squared_error(Original_te, predictionte)
    rmse_te = mse_te ** 0.5
    mae_te = mean_absolute_error(Original_te, predictionte)
    return Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, Original_te, predictionte, mse_te, rmse_te, mae_te
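get_X_y, used by get_lstm, is defined earlier in the notebook; conceptually it slices the scaled arrays into overlapping windows so that each training sample is a few consecutive days of features. A hypothetical minimal sketch of that kind of windowing (make_windows is not the notebook's function):

```python
import numpy as np

def make_windows(X, y, n_steps_in):
    """Slice X into overlapping (n_steps_in, features) windows; each
    window's target is the y value immediately after the window."""
    Xw, yw = [], []
    for i in range(len(X) - n_steps_in):
        Xw.append(X[i:i + n_steps_in])   # n_steps_in consecutive rows
        yw.append(y[i + n_steps_in])     # next-step target
    return np.array(Xw), np.array(yw)

X = np.arange(20.0).reshape(10, 2)  # 10 rows, 2 features
y = np.arange(10.0)
Xw, yw = make_windows(X, y, n_steps_in=3)  # Xw: (7, 3, 2), yw: (7,)
```

The resulting 3-D array matches the (samples, time steps, features) shape the LSTM's input_shape=(input_dim, feature_size) expects.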
if __name__ == '__main__':
    start_time = timeit.default_timer()
    simulation8 = {}
    imgfile = 'Experiment8'
    for ma in optimized_period:
        print(ma)
        print(functions[ma])
        print(int(optimized_period[ma]))
        # Low-volatility component: the moving average with the optimized period
        low_vol = df.apply(lambda c: functions[ma](c, timeperiod=int(optimized_period[ma])))
        low_vol = low_vol.fillna(0)
        low_vol_data = df['close']
        # High-volatility component: the residual after subtracting the moving average
        high_vol = pd.DataFrame()
        df2 = df.copy()
        for i in df2.columns:
            if i in low_vol.columns:
                high_vol[i] = df2[i].subtract(low_vol[i], fill_value=0)
        high_vol_data = df['close']
        # Generate ARIMA (low vol) and LSTM (high vol) predictions
        print('\nWorking on ' + ma + ' predictions')
        try:
            print('parameters used : ', train_len, test_len)
            low_vol_Original, low_vol_prediction, low_vol_mse, low_vol_rmse, low_vol_mae = get_arima_exog(low_vol, low_vol_data, train_len, test_len)
        except Exception:
            print('ARIMA error, skipping to next MA type')
            continue
        Original_tr, predictiontr, mse_tr, rmse_tr, mae_tr, high_vol_Original, high_vol_prediction, high_vol_mse, high_vol_rmse, high_vol_mae = get_lstm(high_vol, high_vol_data, train_len, test_len, imgfile, ma)
        final_prediction_tr = df['close'].head(train_len).values + pd.Series(predictiontr)  # ignoring the first 3 steps
        mse_ftr = mean_squared_error(df['close'].head(train_len).values, final_prediction_tr.values)
        rmse_ftr = mse_ftr ** 0.5
        mape_ftr = mean_absolute_percentage_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        mae_ftr = mean_absolute_error(df['close'].head(train_len).reset_index(drop=True), final_prediction_tr)
        # Recombine components: final forecast = ARIMA prediction + LSTM prediction
        final_prediction = pd.Series(low_vol_prediction[3:]) + pd.Series(high_vol_prediction)
        mse = mean_squared_error(df['close'].tail(test_len).values, final_prediction.values)
        rmse = mse ** 0.5
        mape = mean_absolute_percentage_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        mae = mean_absolute_error(df['close'].tail(test_len).reset_index(drop=True), final_prediction)
        # Generate directional prediction accuracy
        actual = df['close'].tail(test_len).values
        result_1 = []
        result_2 = []
        for i in range(1, len(final_prediction)):
            # Compare prediction to previous close price
            if final_prediction[i] > actual[i-1] and actual[i] > actual[i-1]:
                result_1.append(1)
            elif final_prediction[i] < actual[i-1] and actual[i] < actual[i-1]:
                result_1.append(1)
            else:
                result_1.append(0)
            # Compare prediction to previous prediction
            if final_prediction[i] > final_prediction[i-1] and actual[i] > actual[i-1]:
                result_2.append(1)
            elif final_prediction[i] < final_prediction[i-1] and actual[i] < actual[i-1]:
                result_2.append(1)
            else:
                result_2.append(0)
        accuracy_1 = np.mean(result_1)
        accuracy_2 = np.mean(result_2)
        simulation8[ma] = {
            'low_vol': {'original': list(low_vol_Original), 'prediction': list(low_vol_prediction),
                        'mse': low_vol_mse, 'rmse': low_vol_rmse, 'mae': low_vol_mae},
            'high_vol': {'original': list(high_vol_Original), 'prediction': list(high_vol_prediction),
                         'mse': high_vol_mse, 'rmse': high_vol_rmse, 'mae': high_vol_mae},
            'final_tr': {'original': df['close'].head(train_len).tolist(), 'prediction': final_prediction_tr.values.tolist(),
                         'mse': mse_ftr, 'rmse': rmse_ftr, 'mae': mae_ftr},
            'final': {'original': df['close'].tail(test_len).tolist(), 'prediction': final_prediction.values.tolist(),
                      'mse': mse, 'rmse': rmse, 'mae': mae},
            'accuracy': {'prediction vs close': accuracy_1, 'prediction vs prediction': accuracy_2}}
        # Save simulation data here as a checkpoint
        with open('simulation8_data.json', 'w') as fp:
            json.dump(simulation8, fp)
    for ma in simulation8.keys():
        print('\n' + ma)
        print('Prediction vs Close:\t\t' + str(round(100 * simulation8[ma]['accuracy']['prediction vs close'], 2)) + '% Accuracy')
        print('Prediction vs Prediction:\t' + str(round(100 * simulation8[ma]['accuracy']['prediction vs prediction'], 2)) + '% Accuracy')
        print('MSE:\t', simulation8[ma]['final']['mse'],
              '\nRMSE:\t', simulation8[ma]['final']['rmse'],
              '\nMAE:\t', simulation8[ma]['final']['mae'])
    elapsed = timeit.default_timer() - start_time
    print('Runtime: mins:', elapsed / 60)
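The loop above decomposes each series into a low-volatility moving average (handed to ARIMA) and a high-volatility residual (handed to the LSTM), then sums the two predictions. The decomposition is exact by construction, as this small sketch shows (a hand-rolled SMA stands in for TA-Lib's functions[ma]):

```python
import numpy as np

def sma(series, period):
    """Trailing simple moving average; leading positions are left at 0,
    mirroring the notebook's fillna(0)."""
    out = np.zeros_like(series, dtype=float)
    for i in range(period - 1, len(series)):
        out[i] = series[i - period + 1:i + 1].mean()
    return out

close = np.array([10.0, 11.0, 13.0, 12.0, 14.0, 15.0])
low_vol = sma(close, period=3)      # smooth component (ARIMA's target)
high_vol = close - low_vol          # residual component (LSTM's target)
reconstructed = low_vol + high_vol  # recombining is lossless
```

Any error in the final forecast therefore comes only from the two component models, never from the split itself.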
SMA
SMA([input_arrays], [timeperiod=30])
Simple Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
17
Working on SMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.787, Time=3.71 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.588, Time=5.64 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14596.280, Time=5.73 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.588, Time=8.68 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16924.805, Time=10.33 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14482.349, Time=11.41 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17215.608, Time=21.10 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.588, Time=10.18 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15570.350, Time=18.92 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11671.292, Time=28.02 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 123.736 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8639.804
Date: Sun, 12 Dec 2021 AIC -17215.608
Time: 19:15:57 BIC -17065.501
Sample: 0 HQIC -17157.961
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.057e-09 5.82e-05 -6.97e-05 1.000 -0.000 0.000
x2 -4.057e-09 5.81e-05 -6.99e-05 1.000 -0.000 0.000
x3 -4.111e-09 5.49e-05 -7.49e-05 1.000 -0.000 0.000
x4 1.0000 5.71e-05 1.75e+04 0.000 1.000 1.000
x5 -3.706e-09 5.43e-05 -6.82e-05 1.000 -0.000 0.000
x6 -1.082e-08 0.000 -6.08e-05 1.000 -0.000 0.000
x7 -4.025e-09 5.63e-05 -7.15e-05 1.000 -0.000 0.000
x8 -4.035e-09 5.19e-05 -7.78e-05 1.000 -0.000 0.000
x9 -1.522e-10 2.9e-05 -5.25e-06 1.000 -5.68e-05 5.68e-05
x10 -6.396e-10 1.04e-05 -6.15e-05 1.000 -2.04e-05 2.04e-05
x11 -3.921e-09 5.06e-05 -7.75e-05 1.000 -9.91e-05 9.91e-05
x12 -4.102e-09 5.29e-05 -7.76e-05 1.000 -0.000 0.000
x13 -4.087e-09 5.75e-05 -7.11e-05 1.000 -0.000 0.000
x14 -3.619e-08 0.000 -0.000 1.000 -0.000 0.000
x15 -4.806e-09 4.61e-05 -0.000 1.000 -9.03e-05 9.03e-05
x16 -3.507e-09 0.000 -2.98e-05 1.000 -0.000 0.000
x17 -3.121e-09 6.02e-05 -5.18e-05 1.000 -0.000 0.000
x18 -1.172e-08 0.000 -0.000 1.000 -0.000 0.000
x19 -5.433e-09 6.06e-05 -8.96e-05 1.000 -0.000 0.000
x20 -1.393e-08 4.79e-05 -0.000 1.000 -9.39e-05 9.39e-05
x21 -4.216e-09 6.63e-05 -6.36e-05 1.000 -0.000 0.000
x22 -3.479e-11 1.66e-08 -0.002 0.998 -3.25e-08 3.24e-08
x23 -9.221e-10 1.4e-07 -0.007 0.995 -2.74e-07 2.73e-07
x24 -8.085e-08 0.001 -6.96e-05 1.000 -0.002 0.002
x25 -9.642e-08 0.001 -0.000 1.000 -0.002 0.002
x26 -5.019e-08 0.000 -0.000 1.000 -0.000 0.000
x27 -2.457e-08 7.65e-05 -0.000 1.000 -0.000 0.000
x28 -3.411e-08 0.000 -0.000 1.000 -0.000 0.000
x29 -1.507e-08 4.36e-05 -0.000 1.000 -8.54e-05 8.54e-05
ma.L1 -1.3898 8.03e-07 -1.73e+06 0.000 -1.390 -1.390
ma.L2 0.4031 8.36e-07 4.82e+05 0.000 0.403 0.403
sigma2 7.528e-11 7.24e-11 1.040 0.298 -6.66e-11 2.17e-10
===================================================================================
Ljung-Box (L1) (Q): 89.12 Jarque-Bera (JB): 1533103.33
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 5.56
Prob(H) (two-sided): 0.00 Kurtosis: 216.50
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.08e+25. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04793, saving model to LSTM8.h5
48/48 - 3s - loss: 1.4063 - val_loss: 0.0479 - lr: 0.0010 - 3s/epoch - 71ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.04793
48/48 - 0s - loss: 1.1570 - val_loss: 0.0499 - lr: 0.0010 - 233ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.9394 - val_loss: 0.0520 - lr: 0.0010 - 273ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.8556 - val_loss: 0.0541 - lr: 0.0010 - 243ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.8087 - val_loss: 0.0562 - lr: 0.0010 - 267ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7767 - val_loss: 0.0583 - lr: 0.0010 - 262ms/epoch - 5ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7614 - val_loss: 0.0585 - lr: 1.0000e-04 - 258ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7592 - val_loss: 0.0588 - lr: 1.0000e-04 - 234ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7571 - val_loss: 0.0590 - lr: 1.0000e-04 - 266ms/epoch - 6ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7549 - val_loss: 0.0592 - lr: 1.0000e-04 - 277ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7528 - val_loss: 0.0595 - lr: 1.0000e-04 - 258ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7514 - val_loss: 0.0595 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7512 - val_loss: 0.0595 - lr: 1.0000e-05 - 287ms/epoch - 6ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7510 - val_loss: 0.0596 - lr: 1.0000e-05 - 257ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7508 - val_loss: 0.0596 - lr: 1.0000e-05 - 268ms/epoch - 6ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7505 - val_loss: 0.0596 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7503 - val_loss: 0.0597 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7501 - val_loss: 0.0597 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7498 - val_loss: 0.0597 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7496 - val_loss: 0.0597 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7494 - val_loss: 0.0598 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7491 - val_loss: 0.0598 - lr: 1.0000e-05 - 252ms/epoch - 5ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7489 - val_loss: 0.0598 - lr: 1.0000e-05 - 259ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7486 - val_loss: 0.0599 - lr: 1.0000e-05 - 277ms/epoch - 6ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7484 - val_loss: 0.0599 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7481 - val_loss: 0.0599 - lr: 1.0000e-05 - 267ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7479 - val_loss: 0.0600 - lr: 1.0000e-05 - 253ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7477 - val_loss: 0.0600 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7474 - val_loss: 0.0600 - lr: 1.0000e-05 - 259ms/epoch - 5ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7472 - val_loss: 0.0601 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7469 - val_loss: 0.0601 - lr: 1.0000e-05 - 233ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7466 - val_loss: 0.0601 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7464 - val_loss: 0.0602 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7461 - val_loss: 0.0602 - lr: 1.0000e-05 - 266ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7459 - val_loss: 0.0602 - lr: 1.0000e-05 - 272ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7456 - val_loss: 0.0603 - lr: 1.0000e-05 - 255ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7454 - val_loss: 0.0603 - lr: 1.0000e-05 - 256ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7451 - val_loss: 0.0604 - lr: 1.0000e-05 - 288ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7449 - val_loss: 0.0604 - lr: 1.0000e-05 - 258ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7446 - val_loss: 0.0604 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7443 - val_loss: 0.0605 - lr: 1.0000e-05 - 323ms/epoch - 7ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7441 - val_loss: 0.0605 - lr: 1.0000e-05 - 252ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7438 - val_loss: 0.0605 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7436 - val_loss: 0.0606 - lr: 1.0000e-05 - 273ms/epoch - 6ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7433 - val_loss: 0.0606 - lr: 1.0000e-05 - 238ms/epoch - 5ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7431 - val_loss: 0.0607 - lr: 1.0000e-05 - 274ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7428 - val_loss: 0.0607 - lr: 1.0000e-05 - 262ms/epoch - 5ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7425 - val_loss: 0.0607 - lr: 1.0000e-05 - 296ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7423 - val_loss: 0.0608 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7420 - val_loss: 0.0608 - lr: 1.0000e-05 - 242ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04793
48/48 - 0s - loss: 0.7418 - val_loss: 0.0609 - lr: 1.0000e-05 - 246ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 30.79397335917816
RMSE: 5.549231780992587
MAE: 4.345848876898189
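The "Prediction vs Close" accuracy above counts a hit whenever the forecast and the realized price land strictly on the same side of the previous close, with ties scored as misses. The rule in isolation (sample values are illustrative):

```python
def directional_accuracy(pred, actual):
    """Fraction of steps where pred and actual move strictly to the
    same side of the previous actual value (ties score 0)."""
    hits = 0
    for i in range(1, len(pred)):
        up = pred[i] > actual[i - 1] and actual[i] > actual[i - 1]
        down = pred[i] < actual[i - 1] and actual[i] < actual[i - 1]
        hits += 1 if (up or down) else 0
    return hits / (len(pred) - 1)

# Step 1: predicted a drop but price rose (miss); step 2: both rose (hit)
acc = directional_accuracy([11.0, 9.0, 12.0], [10.0, 10.5, 11.0])  # 0.5
```

A score near 0.5, like the 49.63% "Prediction vs Prediction" figure above, is what a coin flip would achieve on the direction of the next move.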
EMA
EMA([input_arrays], [timeperiod=30])
Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
51
Working on EMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.50 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.43 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15952.568, Time=15.01 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=7.85 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16628.634, Time=10.96 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16462.206, Time=24.67 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16848.298, Time=13.03 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.023, Time=6.75 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.619, Time=3.61 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=7.63 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=18.64 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.994, Time=3.95 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.667, Time=4.79 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 125.840 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 19:21:56 BIC -16911.966
Sample: 0 HQIC -17010.204
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.316e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x2 -2.309e-10 6.24e-05 -3.7e-06 1.000 -0.000 0.000
x3 -2.325e-10 6.26e-05 -3.71e-06 1.000 -0.000 0.000
x4 1.0000 6.25e-05 1.6e+04 0.000 1.000 1.000
x5 -2.107e-10 5.96e-05 -3.54e-06 1.000 -0.000 0.000
x6 -7.997e-10 0.000 -7.41e-06 1.000 -0.000 0.000
x7 -2.295e-10 6.22e-05 -3.69e-06 1.000 -0.000 0.000
x8 -2.246e-10 6.15e-05 -3.65e-06 1.000 -0.000 0.000
x9 -1.167e-11 1.25e-05 -9.33e-07 1.000 -2.45e-05 2.45e-05
x10 -4.454e-11 2.66e-05 -1.68e-06 1.000 -5.21e-05 5.21e-05
x11 -2.221e-10 6.11e-05 -3.63e-06 1.000 -0.000 0.000
x12 -2.266e-10 6.18e-05 -3.66e-06 1.000 -0.000 0.000
x13 -2.315e-10 6.25e-05 -3.71e-06 1.000 -0.000 0.000
x14 -1.767e-09 0.000 -1.02e-05 1.000 -0.000 0.000
x15 -2.11e-10 5.93e-05 -3.56e-06 1.000 -0.000 0.000
x16 -5.283e-10 9.45e-05 -5.59e-06 1.000 -0.000 0.000
x17 -2.098e-10 6.01e-05 -3.49e-06 1.000 -0.000 0.000
x18 -3.82e-11 2.41e-05 -1.58e-06 1.000 -4.73e-05 4.73e-05
x19 -2.645e-10 6.61e-05 -4e-06 1.000 -0.000 0.000
x20 -2.417e-10 6.21e-05 -3.89e-06 1.000 -0.000 0.000
x21 -4.824e-10 8.83e-05 -5.46e-06 1.000 -0.000 0.000
x22 -3.758e-13 1.19e-11 -0.032 0.975 -2.36e-11 2.29e-11
x23 -1.089e-11 8.42e-11 -0.129 0.897 -1.76e-10 1.54e-10
x24 -2.538e-09 0.000 -1.44e-05 1.000 -0.000 0.000
x25 -2.038e-09 0.000 -1.49e-05 1.000 -0.000 0.000
x26 -3.16e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x27 -2.955e-09 0.000 -1.32e-05 1.000 -0.000 0.000
x28 -1.664e-09 0.000 -9.94e-06 1.000 -0.000 0.000
x29 -1.568e-09 0.000 -9.63e-06 1.000 -0.000 0.000
ar.L1 -0.4923 6.2e-10 -7.94e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 3.6e-10 -5.35e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.71e-10 -2.71e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.41e-09 -5.04e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 51.79 Jarque-Bera (JB): 4012066.18
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.44
Prob(H) (two-sided): 0.00 Kurtosis: 348.68
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 5.01e+30. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05184, saving model to LSTM8.h5
16/16 - 3s - loss: 1.4618 - val_loss: 0.0518 - lr: 0.0010 - 3s/epoch - 214ms/step
[epochs 2–50 omitted: val_loss never improved on 0.05184; training loss fell steadily from 1.3997 to 1.1827; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, clamping it at the 1e-05 floor from epoch 16]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05184
16/16 - 0s - loss: 1.1824 - val_loss: 0.0568 - lr: 1.0000e-05 - 103ms/epoch - 6ms/step
Epoch 00051: early stopping
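The learning-rate schedule visible in the log (0.001 cut to 1e-4 at epoch 6, to 1e-5 at epoch 11, then clamped at a 1e-5 floor) is the standard plateau behaviour. A minimal re-implementation of that logic, assuming a patience of 5 epochs, a factor of 0.1 and min_lr of 1e-5 — values inferred from the cut points, since the notebook's actual callback arguments are not shown:

```python
class PlateauLR:
    """Minimal sketch of ReduceLROnPlateau-style behaviour: if val_loss has
    not improved for `patience` epochs, multiply the learning rate by
    `factor`, never going below `min_lr`."""

    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best = float("inf")   # best val_loss seen so far
        self.wait = 0              # epochs since last improvement

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # cut the learning rate, respecting the floor
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

Replaying a trace like the logged one (one improvement at epoch 1, then a slow upward drift) reproduces the cuts at epochs 6 and 11 and the clamp at 1e-5 from epoch 16 onward.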
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 30.79397335917816
RMSE: 5.549231780992587
MAPE: 4.345848876898189
EMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.37277762407691
RMSE: 5.689708043834667
MAPE: 4.4297291061987245
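Each report combines a directional hit-rate ("Prediction vs Close") with MSE, RMSE and MAPE. The notebook's own helper is not shown; a minimal sketch of how such figures are typically computed, assuming `preds` and `close` are aligned NumPy arrays and that "Prediction vs Close" counts sign agreement between predicted and realised moves:

```python
import numpy as np

def report(preds, close):
    """Directional accuracy plus the standard error metrics.

    The hit-rate counts how often the predicted move from the previous
    close has the same sign as the realised move (definition assumed).
    """
    pred_dir = np.sign(preds[1:] - close[:-1])
    real_dir = np.sign(close[1:] - close[:-1])
    hit_rate = 100 * np.mean(pred_dir == real_dir)

    err = preds - close
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)                       # RMSE is just sqrt(MSE)
    mape = 100 * np.mean(np.abs(err / close))
    return hit_rate, mse, rmse, mape
```

As a sanity check on the printed numbers, the SMA line's RMSE of 5.549231780992587 is exactly the square root of its MSE of 30.79397335917816.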
WMA
WMA([input_arrays], [timeperiod=30])
Weighted Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
49
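TA-Lib's WMA weights the window linearly, 1 through `timeperiod`, with the newest bar weighted most. A minimal NumPy equivalent (a hypothetical helper, not the notebook's code), mirroring TA-Lib's NaN lookback at the start:

```python
import numpy as np

def wma(price, timeperiod=30):
    """Weighted moving average with weights 1..timeperiod (newest heaviest).
    The first timeperiod-1 outputs are NaN, mirroring TA-Lib's lookback."""
    price = np.asarray(price, dtype=float)
    w = np.arange(1, timeperiod + 1, dtype=float)
    out = np.full(price.shape, np.nan)
    for i in range(timeperiod - 1, len(price)):
        out[i] = price[i - timeperiod + 1 : i + 1] @ w / w.sum()
    return out
```

For example, a period-3 WMA of [1, 2, 3] ends at (1·1 + 2·2 + 3·3) / 6 = 14/6.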
Working on WMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.778, Time=3.35 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.587, Time=5.41 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-14597.576, Time=5.52 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.587, Time=7.97 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15338.693, Time=10.63 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15153.472, Time=26.39 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17112.658, Time=15.13 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.587, Time=10.63 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15106.216, Time=13.83 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-12251.715, Time=34.06 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 132.942 seconds
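auto_arima's stepwise search ranks candidates by the Akaike information criterion, AIC = 2k − 2 ln L (lower is better). The figure it reports can be reproduced from the SARIMAX summary that follows: the winning (0,3,2) fit has log-likelihood 8588.329 and k = 32 free parameters (29 exogenous coefficients, two MA terms and sigma2 — assuming statsmodels counts parameters this way):

```python
def aic(log_likelihood, n_params):
    # Akaike information criterion: 2k - 2 ln L (lower is better).
    return 2 * n_params - 2 * log_likelihood

# 29 exogenous betas + ma.L1 + ma.L2 + sigma2 = 32 parameters
print(aic(8588.329, 32))   # reproduces the reported AIC of -17112.658
```

The same arithmetic checks out for the later fits, e.g. the (3,3,1) model with log-likelihood 8569.727 and 34 parameters gives −17071.454.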
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8588.329
Date: Sun, 12 Dec 2021 AIC -17112.658
Time: 19:32:50 BIC -16962.551
Sample: 0 HQIC -17055.011
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -4.53e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x2 -4.512e-09 3.25e-06 -0.001 0.999 -6.38e-06 6.37e-06
x3 -4.538e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x4 1.0000 3.26e-06 3.07e+05 0.000 1.000 1.000
x5 -4.105e-09 3.11e-06 -0.001 0.999 -6.1e-06 6.09e-06
x6 -1.488e-08 5.45e-06 -0.003 0.998 -1.07e-05 1.07e-05
x7 -4.481e-09 3.24e-06 -0.001 0.999 -6.36e-06 6.36e-06
x8 -4.365e-09 3.2e-06 -0.001 0.999 -6.29e-06 6.28e-06
x9 -4.628e-10 8.38e-07 -0.001 1.000 -1.64e-06 1.64e-06
x10 -7.326e-10 1.3e-06 -0.001 1.000 -2.55e-06 2.54e-06
x11 -4.347e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x12 -4.345e-09 3.2e-06 -0.001 0.999 -6.27e-06 6.26e-06
x13 -4.52e-09 3.26e-06 -0.001 0.999 -6.39e-06 6.38e-06
x14 -3.586e-08 9e-06 -0.004 0.997 -1.77e-05 1.76e-05
x15 -3.757e-09 2.98e-06 -0.001 0.999 -5.84e-06 5.83e-06
x16 -1.24e-08 5.36e-06 -0.002 0.998 -1.05e-05 1.05e-05
x17 -4.515e-09 3.26e-06 -0.001 0.999 -6.4e-06 6.39e-06
x18 -2.632e-10 7.07e-07 -0.000 1.000 -1.39e-06 1.39e-06
x19 -4.642e-09 3.3e-06 -0.001 0.999 -6.47e-06 6.46e-06
x20 -3.919e-10 6.91e-07 -0.001 1.000 -1.36e-06 1.35e-06
x21 -7.69e-09 4.13e-06 -0.002 0.999 -8.11e-06 8.09e-06
x22 -6.998e-12 2.69e-13 -25.970 0.000 -7.53e-12 -6.47e-12
x23 -1.81e-10 2.22e-12 -81.582 0.000 -1.85e-10 -1.77e-10
x24 -4.955e-08 8.9e-06 -0.006 0.996 -1.75e-05 1.74e-05
x25 -4.901e-08 8.4e-06 -0.006 0.995 -1.65e-05 1.64e-05
x26 -6.446e-08 1.2e-05 -0.005 0.996 -2.37e-05 2.35e-05
x27 -5.73e-08 1.14e-05 -0.005 0.996 -2.24e-05 2.23e-05
x28 -2.997e-08 8.22e-06 -0.004 0.997 -1.61e-05 1.61e-05
x29 -3.486e-08 8.89e-06 -0.004 0.997 -1.75e-05 1.74e-05
ma.L1 -1.3902 3.62e-10 -3.84e+09 0.000 -1.390 -1.390
ma.L2 0.4033 3.72e-10 1.08e+09 0.000 0.403 0.403
sigma2 8.541e-11 6.95e-11 1.229 0.219 -5.08e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 66.92 Jarque-Bera (JB): 6039240.46
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 12.14
Prob(H) (two-sided): 0.00 Kurtosis: 426.63
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 4.94e+30. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04179, saving model to LSTM8.h5
17/17 - 4s - loss: 1.3396 - val_loss: 0.0418 - lr: 0.0010 - 4s/epoch - 219ms/step
[epochs 2–50 omitted: val_loss never improved on 0.04179; training loss fell steadily from 1.2689 to 1.0315; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, clamping it at the 1e-05 floor from epoch 16]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04179
17/17 - 0s - loss: 1.0312 - val_loss: 0.0466 - lr: 1.0000e-05 - 104ms/epoch - 6ms/step
Epoch 00051: early stopping
WMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 58.067607397948805
RMSE: 7.620210456276704
MAPE: 6.244282675104111
DEMA
DEMA([input_arrays], [timeperiod=30])
Double Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
89
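DEMA reduces the lag of a plain EMA via DEMA = 2·EMA(price) − EMA(EMA(price)). A minimal pandas sketch, assuming the usual span convention alpha = 2/(timeperiod + 1); note TA-Lib seeds its EMAs with an SMA and reports NaN over the lookback, so values near the start will differ slightly:

```python
import pandas as pd

def dema(price, timeperiod=30):
    """Double exponential moving average: 2*EMA - EMA(EMA)."""
    s = pd.Series(price, dtype=float)
    ema1 = s.ewm(span=timeperiod, adjust=False).mean()
    ema2 = ema1.ewm(span=timeperiod, adjust=False).mean()
    return 2 * ema1 - ema2
```

On a constant series both EMAs equal the constant, so DEMA does too — a quick way to sanity-check the formula.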
Working on DEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.776, Time=3.39 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.586, Time=5.37 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16271.755, Time=7.19 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.586, Time=8.20 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15152.908, Time=10.72 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14481.105, Time=13.34 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16088.109, Time=21.31 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-17014.021, Time=7.29 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.615, Time=3.39 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-17071.454, Time=7.26 sec
ARIMA(3,3,2)(0,0,0)[0] : AIC=inf, Time=17.72 sec
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/sarimax.py:1906: RuntimeWarning: divide by zero encountered in reciprocal
  return np.roots(self.polynomial_reduced_ma)**-1
ARIMA(2,3,2)(0,0,0)[0] : AIC=-16987.981, Time=4.26 sec
ARIMA(3,3,1)(0,0,0)[0] intercept : AIC=-16982.666, Time=4.59 sec
Best model: ARIMA(3,3,1)(0,0,0)[0]
Total fit time: 114.053 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(3, 3, 1) Log Likelihood 8569.727
Date: Sun, 12 Dec 2021 AIC -17071.454
Time: 19:39:03 BIC -16911.965
Sample: 0 HQIC -17010.203
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -2.8e-10 6.02e-05 -4.65e-06 1.000 -0.000 0.000
x2 -2.817e-10 6.04e-05 -4.66e-06 1.000 -0.000 0.000
x3 -2.805e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x4 1.0000 6.03e-05 1.66e+04 0.000 1.000 1.000
x5 -2.6e-10 5.8e-05 -4.48e-06 1.000 -0.000 0.000
x6 -1.389e-09 0.000 -1.08e-05 1.000 -0.000 0.000
x7 -2.789e-10 6.01e-05 -4.64e-06 1.000 -0.000 0.000
x8 -2.763e-10 5.99e-05 -4.62e-06 1.000 -0.000 0.000
x9 -2.224e-12 1.6e-06 -1.39e-06 1.000 -3.13e-06 3.13e-06
x10 -1.345e-10 4.12e-05 -3.26e-06 1.000 -8.08e-05 8.08e-05
x11 -2.9e-10 6.12e-05 -4.74e-06 1.000 -0.000 0.000
x12 -2.602e-10 5.82e-05 -4.47e-06 1.000 -0.000 0.000
x13 -2.807e-10 6.03e-05 -4.65e-06 1.000 -0.000 0.000
x14 -1.87e-09 0.000 -1.2e-05 1.000 -0.000 0.000
x15 -2.844e-10 6.05e-05 -4.7e-06 1.000 -0.000 0.000
x16 -7.962e-11 3.2e-05 -2.48e-06 1.000 -6.28e-05 6.28e-05
x17 -2.445e-10 5.61e-05 -4.36e-06 1.000 -0.000 0.000
x18 -6.4e-10 9.15e-05 -6.99e-06 1.000 -0.000 0.000
x19 -2.923e-10 6.14e-05 -4.76e-06 1.000 -0.000 0.000
x20 -4.336e-10 7.41e-05 -5.86e-06 1.000 -0.000 0.000
x21 -4.55e-10 7.5e-05 -6.07e-06 1.000 -0.000 0.000
x22 -3.587e-13 1.42e-11 -0.025 0.980 -2.82e-11 2.75e-11
x23 -1.088e-11 9.56e-11 -0.114 0.909 -1.98e-10 1.76e-10
x24 -2.146e-09 0.000 -1.63e-05 1.000 -0.000 0.000
x25 -1.637e-09 0.000 -1.35e-05 1.000 -0.000 0.000
x26 -3.147e-09 0.000 -1.56e-05 1.000 -0.000 0.000
x27 -2.58e-09 0.000 -1.41e-05 1.000 -0.000 0.000
x28 -2.444e-09 0.000 -1.37e-05 1.000 -0.000 0.000
x29 -1.666e-09 0.000 -1.13e-05 1.000 -0.000 0.000
ar.L1 -0.4923 5.1e-10 -9.65e+08 0.000 -0.492 -0.492
ar.L2 -0.1923 2.96e-10 -6.49e+08 0.000 -0.192 -0.192
ar.L3 -0.0462 1.4e-10 -3.29e+08 0.000 -0.046 -0.046
ma.L1 -0.7077 1.16e-09 -6.12e+08 0.000 -0.708 -0.708
sigma2 8.99e-11 6.96e-11 1.291 0.197 -4.66e-11 2.26e-10
===================================================================================
Ljung-Box (L1) (Q): 54.06 Jarque-Bera (JB): 4126495.58
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: 5.48
Prob(H) (two-sided): 0.00 Kurtosis: 353.58
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 2.01e+30. Standard errors may be unstable.
ARIMA order: (3, 3, 1)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04315, saving model to LSTM8.h5
10/10 - 3s - loss: 1.3932 - val_loss: 0.0432 - lr: 0.0010 - 3s/epoch - 341ms/step
[epochs 2–50 omitted: val_loss never improved on 0.04315; training loss fell steadily from 1.3430 to 1.0577; ReduceLROnPlateau cut the learning rate to 1e-04 at epoch 6 and to 1e-05 at epoch 11, clamping it at the 1e-05 floor from epoch 16]
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.04315
10/10 - 0s - loss: 1.0573 - val_loss: 0.0459 - lr: 1.0000e-05 - 94ms/epoch - 9ms/step
Epoch 00051: early stopping
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 44.4% Accuracy
MSE: 166.4719121939062
RMSE: 12.902399474280209
MAPE: 11.649540302125361
KAMA
KAMA([input_arrays], [timeperiod=30])
Kaufman Adaptive Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
18
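KAMA adapts its smoothing to an efficiency ratio ER = |net change over the period| / Σ|bar-to-bar changes|; the smoothing constant is (ER·(fast_sc − slow_sc) + slow_sc)² with fast_sc = 2/3 and slow_sc = 2/31 for the defaults. A minimal NumPy sketch, seeded with the first in-window price (TA-Lib's seeding differs, so early values will not match exactly):

```python
import numpy as np

def kama(price, timeperiod=30, fast=2, slow=30):
    """Kaufman Adaptive Moving Average (simplified seeding)."""
    price = np.asarray(price, dtype=float)
    fast_sc, slow_sc = 2 / (fast + 1), 2 / (slow + 1)
    out = np.full(price.shape, np.nan)
    out[timeperiod] = price[timeperiod]    # seed value (assumption)
    for i in range(timeperiod + 1, len(price)):
        change = abs(price[i] - price[i - timeperiod])
        volatility = np.abs(np.diff(price[i - timeperiod : i + 1])).sum()
        er = change / volatility if volatility else 1.0
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2
        # adaptive EMA step: fast when the series trends, slow when it chops
        out[i] = out[i - 1] + sc * (price[i] - out[i - 1])
    return out
```

On a perfectly trending series ER = 1, so the filter runs at its fastest constant (2/3)² and tracks price closely; on a choppy series ER falls toward 0 and it slows toward (2/31)².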
Working on KAMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.104, Time=3.73 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.591, Time=5.52 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16779.655, Time=10.82 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.590, Time=8.55 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16989.430, Time=3.57 sec
ARIMA(2,3,0)(0,0,0)[0] : AIC=-16990.286, Time=3.65 sec
ARIMA(3,3,0)(0,0,0)[0] : AIC=-16988.543, Time=4.36 sec
ARIMA(3,3,1)(0,0,0)[0] : AIC=-16987.154, Time=4.11 sec
ARIMA(2,3,0)(0,0,0)[0] intercept : AIC=-16533.935, Time=16.02 sec
Best model: ARIMA(2,3,0)(0,0,0)[0]
Total fit time: 60.350 seconds
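The stepwise trace above is produced by `pmdarima.auto_arima`, which walks through neighbouring (p, d, q) candidates and keeps the order with the lowest AIC. The selection principle can be sketched without pmdarima: fit each candidate by least squares and compare AIC = 2k + n·ln(RSS/n). This is a simplified stand-in under stated assumptions (pure-NumPy AR fitting, a fixed differencing order, no MA terms), not pmdarima's actual search.

```python
import numpy as np

def ar_aic(series, p):
    """Fit AR(p) with intercept by OLS; return AIC = 2k + n*ln(RSS/n)."""
    y = series[p:]
    lags = [series[p - i - 1:len(series) - i - 1] for i in range(p)]
    X = np.column_stack([np.ones(len(y))] + lags)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = len(y), p + 1
    return 2 * k + n * np.log(rss / n)

def stepwise_order(series, d=1, max_p=4):
    """Difference d times, then pick the AR order minimising AIC."""
    z = np.diff(series, n=d)
    aics = {p: ar_aic(z, p) for p in range(1, max_p + 1)}
    return min(aics, key=aics.get), aics
```

As in the trace, a lower (more negative) AIC wins; the 2k term is what stops the search from always preferring the largest model.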
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(2, 3, 0) Log Likelihood 8527.143
Date: Sun, 12 Dec 2021 AIC -16990.286
Time: 19:48:55 BIC -16840.179
Sample: 0 HQIC -16932.639
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.1e-16 nan nan nan nan nan
x2 -3.811e-16 -0 inf 0.000 -3.81e-16 -3.81e-16
x3 8.776e-16 4.38e-27 2e+11 0.000 8.78e-16 8.78e-16
x4 1.0000 4.36e-27 2.29e+26 0.000 1.000 1.000
x5 6.686e-16 4.14e-27 1.61e+11 0.000 6.69e-16 6.69e-16
x6 -5.238e-17 9.44e-27 -5.55e+09 0.000 -5.24e-17 -5.24e-17
x7 -1.709e-16 4.37e-27 -3.91e+10 0.000 -1.71e-16 -1.71e-16
x8 1.439e-15 4.33e-27 3.32e+11 0.000 1.44e-15 1.44e-15
x9 -2.924e-16 5.73e-28 -5.1e+11 0.000 -2.92e-16 -2.92e-16
x10 -1.028e-16 1.78e-27 -5.76e+10 0.000 -1.03e-16 -1.03e-16
x11 -4.338e-16 4.31e-27 -1.01e+11 0.000 -4.34e-16 -4.34e-16
x12 1.72e-16 4.33e-27 3.97e+10 0.000 1.72e-16 1.72e-16
x13 -3.011e-16 4.36e-27 -6.91e+10 0.000 -3.01e-16 -3.01e-16
x14 -2.611e-16 1.27e-26 -2.06e+10 0.000 -2.61e-16 -2.61e-16
x15 1.53e-14 4.46e-27 3.43e+12 0.000 1.53e-14 1.53e-14
x16 -1.401e-14 5.45e-27 -2.57e+12 0.000 -1.4e-14 -1.4e-14
x17 2.316e-14 4.12e-27 5.62e+12 0.000 2.32e-14 2.32e-14
x18 -3.727e-15 3.71e-27 -1.01e+12 0.000 -3.73e-15 -3.73e-15
x19 -1.361e-14 4.94e-27 -2.75e+12 0.000 -1.36e-14 -1.36e-14
x20 -5.277e-15 6.08e-27 -8.68e+11 0.000 -5.28e-15 -5.28e-15
x21 1.178e-18 3.12e-27 3.77e+08 0.000 1.18e-18 1.18e-18
x22 -8.779e-17 1.74e-29 -5.05e+12 0.000 -8.78e-17 -8.78e-17
x23 3.183e-17 5.91e-29 5.39e+11 0.000 3.18e-17 3.18e-17
x24 -1.683e-16 1.41e-26 -1.19e+10 0.000 -1.68e-16 -1.68e-16
x25 8.988e-17 1.48e-30 6.08e+13 0.000 8.99e-17 8.99e-17
x26 4.435e-17 1.58e-26 2.8e+09 0.000 4.44e-17 4.44e-17
x27 1.538e-16 8.87e-27 1.73e+10 0.000 1.54e-16 1.54e-16
x28 1.635e-16 1.22e-26 1.34e+10 0.000 1.63e-16 1.63e-16
x29 1.474e-16 6.34e-27 2.33e+10 0.000 1.47e-16 1.47e-16
ar.L1 -0.9879 1.21e-22 -8.16e+21 0.000 -0.988 -0.988
ar.L2 -0.4879 1.29e-22 -3.79e+21 0.000 -0.488 -0.488
sigma2 1e-10 6.99e-11 1.432 0.152 -3.69e-11 2.37e-10
===================================================================================
Ljung-Box (L1) (Q): 57.29 Jarque-Bera (JB): 559955.86
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.13 Skew: 0.64
Prob(H) (two-sided): 0.00 Kurtosis: 132.20
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number inf. Standard errors may be unstable.
/usr/local/lib/python3.7/dist-packages/statsmodels/tsa/statespace/mlemodel.py:2968: RuntimeWarning: divide by zero encountered in true_divide
  return self.params / self.bse
ARIMA order: (2, 3, 0)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05134, saving model to LSTM8.h5
45/45 - 4s - loss: 1.4205 - val_loss: 0.0513 - lr: 0.0010 - 4s/epoch - 80ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05134
45/45 - 0s - loss: 1.3224 - val_loss: 0.0543 - lr: 0.0010 - 239ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05134
45/45 - 0s - loss: 1.2059 - val_loss: 0.0581 - lr: 0.0010 - 231ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05134
45/45 - 0s - loss: 1.0884 - val_loss: 0.0630 - lr: 0.0010 - 254ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.9954 - val_loss: 0.0687 - lr: 0.0010 - 258ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.9266 - val_loss: 0.0749 - lr: 0.0010 - 251ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8908 - val_loss: 0.0755 - lr: 1.0000e-04 - 258ms/epoch - 6ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8858 - val_loss: 0.0761 - lr: 1.0000e-04 - 258ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8809 - val_loss: 0.0768 - lr: 1.0000e-04 - 229ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8762 - val_loss: 0.0775 - lr: 1.0000e-04 - 244ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8715 - val_loss: 0.0782 - lr: 1.0000e-04 - 257ms/epoch - 6ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8687 - val_loss: 0.0782 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8682 - val_loss: 0.0783 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8677 - val_loss: 0.0784 - lr: 1.0000e-05 - 264ms/epoch - 6ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8673 - val_loss: 0.0785 - lr: 1.0000e-05 - 240ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8668 - val_loss: 0.0785 - lr: 1.0000e-05 - 271ms/epoch - 6ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8664 - val_loss: 0.0786 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8659 - val_loss: 0.0787 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8654 - val_loss: 0.0788 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8650 - val_loss: 0.0789 - lr: 1.0000e-05 - 287ms/epoch - 6ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8645 - val_loss: 0.0789 - lr: 1.0000e-05 - 250ms/epoch - 6ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8640 - val_loss: 0.0790 - lr: 1.0000e-05 - 269ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8636 - val_loss: 0.0791 - lr: 1.0000e-05 - 221ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8631 - val_loss: 0.0792 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8626 - val_loss: 0.0793 - lr: 1.0000e-05 - 219ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8622 - val_loss: 0.0794 - lr: 1.0000e-05 - 248ms/epoch - 6ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8617 - val_loss: 0.0795 - lr: 1.0000e-05 - 235ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8612 - val_loss: 0.0796 - lr: 1.0000e-05 - 227ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8608 - val_loss: 0.0796 - lr: 1.0000e-05 - 262ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8603 - val_loss: 0.0797 - lr: 1.0000e-05 - 249ms/epoch - 6ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8598 - val_loss: 0.0798 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8594 - val_loss: 0.0799 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8589 - val_loss: 0.0800 - lr: 1.0000e-05 - 231ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8584 - val_loss: 0.0801 - lr: 1.0000e-05 - 252ms/epoch - 6ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8580 - val_loss: 0.0802 - lr: 1.0000e-05 - 234ms/epoch - 5ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8575 - val_loss: 0.0803 - lr: 1.0000e-05 - 228ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8570 - val_loss: 0.0804 - lr: 1.0000e-05 - 225ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8565 - val_loss: 0.0805 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8561 - val_loss: 0.0806 - lr: 1.0000e-05 - 245ms/epoch - 5ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8556 - val_loss: 0.0807 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8551 - val_loss: 0.0808 - lr: 1.0000e-05 - 251ms/epoch - 6ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8547 - val_loss: 0.0809 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8542 - val_loss: 0.0810 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8537 - val_loss: 0.0811 - lr: 1.0000e-05 - 243ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8533 - val_loss: 0.0812 - lr: 1.0000e-05 - 270ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8528 - val_loss: 0.0813 - lr: 1.0000e-05 - 254ms/epoch - 6ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8523 - val_loss: 0.0814 - lr: 1.0000e-05 - 278ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8519 - val_loss: 0.0815 - lr: 1.0000e-05 - 265ms/epoch - 6ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8514 - val_loss: 0.0816 - lr: 1.0000e-05 - 252ms/epoch - 6ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8509 - val_loss: 0.0817 - lr: 1.0000e-05 - 253ms/epoch - 6ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05134
45/45 - 0s - loss: 0.8505 - val_loss: 0.0818 - lr: 1.0000e-05 - 261ms/epoch - 6ms/step
Epoch 00051: early stopping
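Every training log in this section shows the same pattern: the learning rate steps 1e-3 → 1e-4 → 1e-5 while val_loss never beats epoch 1, and training halts at epoch 51. That is the standard Keras ReduceLROnPlateau + EarlyStopping combination; its bookkeeping can be mimicked with a small pure-Python tracker. The settings below (factor 0.1, LR patience 5, min_lr 1e-5, stopping patience 50) are inferred from the log, not taken from the notebook's code.

```python
class PlateauTracker:
    """Mimics ReduceLROnPlateau + EarlyStopping bookkeeping on val_loss.

    Settings are assumptions inferred from the printed log: factor=0.1,
    lr_patience=5, min_lr=1e-5, stop_patience=50.
    """
    def __init__(self, lr=1e-3, factor=0.1, lr_patience=5,
                 min_lr=1e-5, stop_patience=50):
        self.lr, self.factor, self.min_lr = lr, factor, min_lr
        self.lr_patience, self.stop_patience = lr_patience, stop_patience
        self.best = float("inf")
        self.lr_wait = self.stop_wait = 0
        self.stopped = False

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:           # improvement: reset both counters
            self.best = val_loss
            self.lr_wait = self.stop_wait = 0
        else:                              # plateau: advance both counters
            self.lr_wait += 1
            self.stop_wait += 1
            if self.lr_wait >= self.lr_patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.lr_wait = 0
            if self.stop_wait >= self.stop_patience:
                self.stopped = True
```

Note the trap visible above: once val_loss fails to improve on epoch 1's 0.05134, shrinking the learning rate only freezes the model near a poor solution, and fifty epochs are spent waiting for the stop.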
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 30.79397335917816
RMSE: 5.549231780992587
MAPE: 4.345848876898189
EMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.37277762407691
RMSE: 5.689708043834667
MAPE: 4.4297291061987245
WMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 58.067607397948805
RMSE: 7.620210456276704
MAPE: 6.244282675104111
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 44.4% Accuracy
MSE: 166.4719121939062
RMSE: 12.902399474280209
MAPE: 11.649540302125361
KAMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 17.81489427298047
RMSE: 4.220769393485087
MAPE: 3.4008273908825086
MIDPOINT
MIDPOINT([input_arrays], [timeperiod=14])
MidPoint over period (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 14
Outputs:
real
14
Working on MIDPOINT predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16989.238, Time=3.61 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14558.578, Time=5.27 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16746.296, Time=8.10 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14556.578, Time=8.17 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16987.591, Time=3.72 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-16395.520, Time=13.13 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17063.555, Time=12.18 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14552.578, Time=10.48 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16082.554, Time=18.75 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-15249.608, Time=18.60 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 102.023 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8563.778
Date: Sun, 12 Dec 2021 AIC -17063.555
Time: 19:52:22 BIC -16913.448
Sample: 0 HQIC -17005.908
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.495e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x2 -1.485e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x3 -1.518e-10 0.000 -1.21e-06 1.000 -0.000 0.000
x4 1.0000 0.000 8075.329 0.000 1.000 1.000
x5 -1.356e-10 0.000 -1.15e-06 1.000 -0.000 0.000
x6 -2.861e-09 0.000 -2.38e-05 1.000 -0.000 0.000
x7 -1.374e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x8 -1.371e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x9 -7.133e-11 7.1e-06 -1.01e-05 1.000 -1.39e-05 1.39e-05
x10 -1.23e-10 4.21e-05 -2.92e-06 1.000 -8.24e-05 8.24e-05
x11 -1.357e-10 0.000 -1.1e-06 1.000 -0.000 0.000
x12 -1.401e-10 0.000 -1.11e-06 1.000 -0.000 0.000
x13 -1.436e-10 0.000 -1.16e-06 1.000 -0.000 0.000
x14 -1.179e-09 0.000 -3.22e-06 1.000 -0.001 0.001
x15 -1.651e-10 0.000 -1.2e-06 1.000 -0.000 0.000
x16 -1.064e-10 0.000 -9.62e-07 1.000 -0.000 0.000
x17 -1.041e-10 0.000 -9.53e-07 1.000 -0.000 0.000
x18 -4.477e-10 0.000 -1.99e-06 1.000 -0.000 0.000
x19 -1.816e-10 0.000 -1.26e-06 1.000 -0.000 0.000
x20 -4.37e-10 0.000 -1.96e-06 1.000 -0.000 0.000
x21 -1.371e-09 9.1e-05 -1.51e-05 1.000 -0.000 0.000
x22 -1.059e-11 nan nan nan nan nan
x23 -9.902e-11 3.83e-09 -0.026 0.979 -7.61e-09 7.41e-09
x24 -5.521e-09 0.000 -1.34e-05 1.000 -0.001 0.001
x25 -4.621e-09 6.42e-05 -7.2e-05 1.000 -0.000 0.000
x26 -1.587e-09 0.000 -3.73e-06 1.000 -0.001 0.001
x27 -8.504e-10 0.000 -2.79e-06 1.000 -0.001 0.001
x28 -1.122e-09 0.000 -3.14e-06 1.000 -0.001 0.001
x29 -6.091e-10 0.000 -2.45e-06 1.000 -0.000 0.000
ma.L1 -1.3318 7.32e-07 -1.82e+06 0.000 -1.332 -1.332
ma.L2 0.3767 7.56e-07 4.98e+05 0.000 0.377 0.377
sigma2 9.093e-11 6.97e-11 1.304 0.192 -4.57e-11 2.28e-10
===================================================================================
Ljung-Box (L1) (Q): 76.00 Jarque-Bera (JB): 304933.46
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.03 Skew: 1.65
Prob(H) (two-sided): 0.00 Kurtosis: 98.29
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 1.19e+28. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05076, saving model to LSTM8.h5
58/58 - 4s - loss: 1.4164 - val_loss: 0.0508 - lr: 0.0010 - 4s/epoch - 69ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.3574 - val_loss: 0.0538 - lr: 0.0010 - 314ms/epoch - 5ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.2772 - val_loss: 0.0576 - lr: 0.0010 - 280ms/epoch - 5ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.2063 - val_loss: 0.0623 - lr: 0.0010 - 285ms/epoch - 5ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.1449 - val_loss: 0.0679 - lr: 0.0010 - 320ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0907 - val_loss: 0.0743 - lr: 0.0010 - 324ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0598 - val_loss: 0.0749 - lr: 1.0000e-04 - 284ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0551 - val_loss: 0.0756 - lr: 1.0000e-04 - 305ms/epoch - 5ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0505 - val_loss: 0.0763 - lr: 1.0000e-04 - 315ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0458 - val_loss: 0.0770 - lr: 1.0000e-04 - 282ms/epoch - 5ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0413 - val_loss: 0.0777 - lr: 1.0000e-04 - 280ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0384 - val_loss: 0.0777 - lr: 1.0000e-05 - 304ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0379 - val_loss: 0.0778 - lr: 1.0000e-05 - 311ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0375 - val_loss: 0.0779 - lr: 1.0000e-05 - 277ms/epoch - 5ms/step
Epoch 15/500
Epoch 00015: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0370 - val_loss: 0.0780 - lr: 1.0000e-05 - 318ms/epoch - 5ms/step
Epoch 16/500
Epoch 00016: ReduceLROnPlateau reducing learning rate to 1e-05.
Epoch 00016: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0366 - val_loss: 0.0780 - lr: 1.0000e-05 - 288ms/epoch - 5ms/step
Epoch 17/500
Epoch 00017: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0361 - val_loss: 0.0781 - lr: 1.0000e-05 - 296ms/epoch - 5ms/step
Epoch 18/500
Epoch 00018: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0356 - val_loss: 0.0782 - lr: 1.0000e-05 - 278ms/epoch - 5ms/step
Epoch 19/500
Epoch 00019: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0352 - val_loss: 0.0783 - lr: 1.0000e-05 - 300ms/epoch - 5ms/step
Epoch 20/500
Epoch 00020: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0347 - val_loss: 0.0783 - lr: 1.0000e-05 - 303ms/epoch - 5ms/step
Epoch 21/500
Epoch 00021: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0342 - val_loss: 0.0784 - lr: 1.0000e-05 - 294ms/epoch - 5ms/step
Epoch 22/500
Epoch 00022: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0338 - val_loss: 0.0785 - lr: 1.0000e-05 - 326ms/epoch - 6ms/step
Epoch 23/500
Epoch 00023: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0333 - val_loss: 0.0786 - lr: 1.0000e-05 - 311ms/epoch - 5ms/step
Epoch 24/500
Epoch 00024: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0329 - val_loss: 0.0787 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 25/500
Epoch 00025: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0324 - val_loss: 0.0787 - lr: 1.0000e-05 - 297ms/epoch - 5ms/step
Epoch 26/500
Epoch 00026: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0319 - val_loss: 0.0788 - lr: 1.0000e-05 - 312ms/epoch - 5ms/step
Epoch 27/500
Epoch 00027: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0315 - val_loss: 0.0789 - lr: 1.0000e-05 - 298ms/epoch - 5ms/step
Epoch 28/500
Epoch 00028: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0310 - val_loss: 0.0790 - lr: 1.0000e-05 - 285ms/epoch - 5ms/step
Epoch 29/500
Epoch 00029: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0305 - val_loss: 0.0791 - lr: 1.0000e-05 - 353ms/epoch - 6ms/step
Epoch 30/500
Epoch 00030: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0301 - val_loss: 0.0791 - lr: 1.0000e-05 - 307ms/epoch - 5ms/step
Epoch 31/500
Epoch 00031: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0296 - val_loss: 0.0792 - lr: 1.0000e-05 - 314ms/epoch - 5ms/step
Epoch 32/500
Epoch 00032: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0291 - val_loss: 0.0793 - lr: 1.0000e-05 - 289ms/epoch - 5ms/step
Epoch 33/500
Epoch 00033: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0287 - val_loss: 0.0794 - lr: 1.0000e-05 - 317ms/epoch - 5ms/step
Epoch 34/500
Epoch 00034: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0282 - val_loss: 0.0795 - lr: 1.0000e-05 - 304ms/epoch - 5ms/step
Epoch 35/500
Epoch 00035: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0277 - val_loss: 0.0796 - lr: 1.0000e-05 - 319ms/epoch - 6ms/step
Epoch 36/500
Epoch 00036: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0273 - val_loss: 0.0797 - lr: 1.0000e-05 - 292ms/epoch - 5ms/step
Epoch 37/500
Epoch 00037: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0268 - val_loss: 0.0797 - lr: 1.0000e-05 - 298ms/epoch - 5ms/step
Epoch 38/500
Epoch 00038: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0263 - val_loss: 0.0798 - lr: 1.0000e-05 - 320ms/epoch - 6ms/step
Epoch 39/500
Epoch 00039: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0259 - val_loss: 0.0799 - lr: 1.0000e-05 - 344ms/epoch - 6ms/step
Epoch 40/500
Epoch 00040: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0254 - val_loss: 0.0800 - lr: 1.0000e-05 - 301ms/epoch - 5ms/step
Epoch 41/500
Epoch 00041: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0249 - val_loss: 0.0801 - lr: 1.0000e-05 - 309ms/epoch - 5ms/step
Epoch 42/500
Epoch 00042: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0245 - val_loss: 0.0802 - lr: 1.0000e-05 - 284ms/epoch - 5ms/step
Epoch 43/500
Epoch 00043: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0240 - val_loss: 0.0803 - lr: 1.0000e-05 - 280ms/epoch - 5ms/step
Epoch 44/500
Epoch 00044: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0235 - val_loss: 0.0803 - lr: 1.0000e-05 - 292ms/epoch - 5ms/step
Epoch 45/500
Epoch 00045: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0231 - val_loss: 0.0804 - lr: 1.0000e-05 - 324ms/epoch - 6ms/step
Epoch 46/500
Epoch 00046: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0226 - val_loss: 0.0805 - lr: 1.0000e-05 - 295ms/epoch - 5ms/step
Epoch 47/500
Epoch 00047: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0221 - val_loss: 0.0806 - lr: 1.0000e-05 - 323ms/epoch - 6ms/step
Epoch 48/500
Epoch 00048: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0217 - val_loss: 0.0807 - lr: 1.0000e-05 - 284ms/epoch - 5ms/step
Epoch 49/500
Epoch 00049: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0212 - val_loss: 0.0808 - lr: 1.0000e-05 - 317ms/epoch - 5ms/step
Epoch 50/500
Epoch 00050: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0208 - val_loss: 0.0809 - lr: 1.0000e-05 - 287ms/epoch - 5ms/step
Epoch 51/500
Epoch 00051: val_loss did not improve from 0.05076
58/58 - 0s - loss: 1.0203 - val_loss: 0.0810 - lr: 1.0000e-05 - 314ms/epoch - 5ms/step
Epoch 00051: early stopping
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 30.79397335917816
RMSE: 5.549231780992587
MAPE: 4.345848876898189
EMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.37277762407691
RMSE: 5.689708043834667
MAPE: 4.4297291061987245
WMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 58.067607397948805
RMSE: 7.620210456276704
MAPE: 6.244282675104111
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 44.4% Accuracy
MSE: 166.4719121939062
RMSE: 12.902399474280209
MAPE: 11.649540302125361
KAMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 17.81489427298047
RMSE: 4.220769393485087
MAPE: 3.4008273908825086
MIDPOINT
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 18.523766068694844
RMSE: 4.303924496165662
MAPE: 3.4879205441290337
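Each indicator's run above follows the same hybrid recipe: fit ARIMA to the smoothed series, train the LSTM on what ARIMA leaves behind, then add the two forecasts. A sketch of that combination step, assuming the usual additive decomposition (the arrays are illustrative placeholders, not values from the notebook):

```python
import numpy as np

# Placeholder stand-ins for the two model outputs (illustrative values only).
arima_forecast = np.array([101.2, 101.9, 102.4])      # linear component
lstm_residual_forecast = np.array([0.4, -0.2, 0.1])   # nonlinear residual component

# Hybrid ARIMA-LSTM: final prediction = ARIMA forecast + LSTM's forecast of
# the ARIMA residuals (additive decomposition assumption).
hybrid_forecast = arima_forecast + lstm_residual_forecast
```

The additive split is what motivates the volatility-balance point from the introduction: ARIMA should absorb as much linear structure as possible so the residual series handed to the LSTM carries mostly the nonlinear part.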
T3
T3([input_arrays], [timeperiod=5], [vfactor=0.7])
Triple Exponential Moving Average (T3) (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 5
vfactor: 0.7
Outputs:
real
19
Working on T3 predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16837.838, Time=3.58 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-14497.319, Time=3.91 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-16084.348, Time=6.66 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-15317.920, Time=11.11 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-15304.480, Time=11.43 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-15949.053, Time=12.62 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-17059.707, Time=11.70 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-15313.920, Time=14.42 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-16054.952, Time=13.12 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-11445.350, Time=32.83 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 121.412 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8561.853
Date: Sun, 12 Dec 2021 AIC -17059.707
Time: 19:58:39 BIC -16909.600
Sample: 0 HQIC -17002.059
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 -1.003e-07 7.69e-05 -0.001 0.999 -0.000 0.000
x2 -1.001e-07 7.44e-05 -0.001 0.999 -0.000 0.000
x3 -1.006e-07 7.84e-05 -0.001 0.999 -0.000 0.000
x4 1.0000 7.11e-05 1.41e+04 0.000 1.000 1.000
x5 -9.611e-08 6.77e-05 -0.001 0.999 -0.000 0.000
x6 -1.249e-07 4.06e-05 -0.003 0.998 -7.96e-05 7.94e-05
x7 -1e-07 7.89e-05 -0.001 0.999 -0.000 0.000
x8 -0.0002 9.43e-05 -1.838 0.066 -0.000 1.15e-05
x9 2.853e-08 9.89e-05 0.000 1.000 -0.000 0.000
x10 -4.022e-05 0.000 -0.200 0.842 -0.000 0.000
x11 0.0003 7e-05 4.122 0.000 0.000 0.000
x12 7.55e-05 0.000 0.633 0.527 -0.000 0.000
x13 -1.005e-07 7.29e-05 -0.001 0.999 -0.000 0.000
x14 -2.756e-07 0.000 -0.001 0.999 -0.000 0.000
x15 -8.419e-08 8.98e-05 -0.001 0.999 -0.000 0.000
x16 -2.171e-07 0.000 -0.001 0.999 -0.000 0.000
x17 -1.105e-07 9.93e-05 -0.001 0.999 -0.000 0.000
x18 1.263e-07 3.22e-05 0.004 0.997 -6.31e-05 6.33e-05
x19 -8.769e-08 0.000 -0.001 0.999 -0.000 0.000
x20 -5.772e-08 0.000 -0.000 1.000 -0.000 0.000
x21 -9.77e-08 0.000 -0.001 1.000 -0.000 0.000
x22 -3.686e-12 7.09e-07 -5.2e-06 1.000 -1.39e-06 1.39e-06
x23 -9.216e-12 2.4e-05 -3.83e-07 1.000 -4.71e-05 4.71e-05
x24 -3.648e-07 0.000 -0.001 0.999 -0.001 0.001
x25 -1.391e-07 0.001 -0.000 1.000 -0.002 0.002
x26 -3.142e-07 0.000 -0.001 0.999 -0.001 0.001
x27 -3.042e-07 5.47e-05 -0.006 0.996 -0.000 0.000
x28 -1.785e-07 0.000 -0.001 0.999 -0.000 0.000
x29 -1.909e-07 0.000 -0.001 1.000 -0.001 0.001
ma.L1 -1.3901 8.24e-06 -1.69e+05 0.000 -1.390 -1.390
ma.L2 0.4035 2.01e-05 2.01e+04 0.000 0.403 0.404
sigma2 7.538e-11 6.94e-11 1.085 0.278 -6.07e-11 2.11e-10
===================================================================================
Ljung-Box (L1) (Q): 69.36 Jarque-Bera (JB): 6470073.86
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.00 Skew: -12.55
Prob(H) (two-sided): 0.00 Kurtosis: 441.48
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 3.58e+22. Standard errors may be unstable.
ARIMA order: (0, 3, 2)
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.05379, saving model to LSTM8.h5
43/43 - 3s - loss: 1.3977 - val_loss: 0.0538 - lr: 0.0010 - 3s/epoch - 80ms/step
Epoch 2/500
Epoch 00002: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.3404 - val_loss: 0.0575 - lr: 0.0010 - 248ms/epoch - 6ms/step
Epoch 3/500
Epoch 00003: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.2883 - val_loss: 0.0623 - lr: 0.0010 - 252ms/epoch - 6ms/step
Epoch 4/500
Epoch 00004: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.2305 - val_loss: 0.0687 - lr: 0.0010 - 240ms/epoch - 6ms/step
Epoch 5/500
Epoch 00005: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.1712 - val_loss: 0.0768 - lr: 0.0010 - 247ms/epoch - 6ms/step
Epoch 6/500
Epoch 00006: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Epoch 00006: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.1137 - val_loss: 0.0860 - lr: 0.0010 - 243ms/epoch - 6ms/step
Epoch 7/500
Epoch 00007: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0791 - val_loss: 0.0870 - lr: 1.0000e-04 - 224ms/epoch - 5ms/step
Epoch 8/500
Epoch 00008: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0739 - val_loss: 0.0879 - lr: 1.0000e-04 - 254ms/epoch - 6ms/step
Epoch 9/500
Epoch 00009: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0689 - val_loss: 0.0889 - lr: 1.0000e-04 - 217ms/epoch - 5ms/step
Epoch 10/500
Epoch 00010: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0639 - val_loss: 0.0898 - lr: 1.0000e-04 - 255ms/epoch - 6ms/step
Epoch 11/500
Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
Epoch 00011: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0589 - val_loss: 0.0908 - lr: 1.0000e-04 - 224ms/epoch - 5ms/step
Epoch 12/500
Epoch 00012: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0558 - val_loss: 0.0909 - lr: 1.0000e-05 - 232ms/epoch - 5ms/step
Epoch 13/500
Epoch 00013: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0553 - val_loss: 0.0910 - lr: 1.0000e-05 - 224ms/epoch - 5ms/step
Epoch 14/500
Epoch 00014: val_loss did not improve from 0.05379
43/43 - 0s - loss: 1.0548 - val_loss: 0.0911 - lr: 1.0000e-05 - 222ms/epoch - 5ms/step
Epochs 15-51: val_loss did not improve from 0.05379; loss decreased slowly from 1.0544 to 1.0366, val_loss drifted from 0.0912 to 0.0947, and the lr stayed at 1.0000e-05 after ReduceLROnPlateau fired again at epoch 16 (43/43 batches, ~220-290ms/epoch).
Epoch 00051: early stopping
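The callback behaviour visible in the log (the learning rate cut by 10x after a run of non-improving epochs, then clamped at 1e-05, with early stopping at epoch 51) can be mimicked in plain Python. This is only a sketch of the plateau logic: `factor`, `patience`, and `min_lr` are assumed values inferred from the spacing of the reductions in the log (every 5 epochs), not the notebook's actual Keras `ReduceLROnPlateau` configuration.

```python
def reduce_lr_on_plateau(val_losses, lr0=1e-3, factor=0.1, patience=5, min_lr=1e-5):
    """Cut lr by `factor` after `patience` epochs without a new best val_loss,
    never below `min_lr`; returns the lr in effect at each epoch."""
    lr, best, wait = lr0, float('inf'), 0
    history = []
    for vl in val_losses:
        if vl < best:
            best, wait = vl, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
        history.append(lr)
    return history
```

With one improving epoch followed by a plateau, the schedule reproduces the log's pattern: reductions at epochs 6 and 11, then the lr pinned at `min_lr`.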
SMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 49.63% Accuracy
MSE: 30.79397335917816
RMSE: 5.549231780992587
MAPE: 4.345848876898189
EMA
Prediction vs Close: 55.22% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 32.37277762407691
RMSE: 5.689708043834667
MAPE: 4.4297291061987245
WMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 47.01% Accuracy
MSE: 58.067607397948805
RMSE: 7.620210456276704
MAPE: 6.244282675104111
DEMA
Prediction vs Close: 53.36% Accuracy
Prediction vs Prediction: 44.4% Accuracy
MSE: 166.4719121939062
RMSE: 12.902399474280209
MAPE: 11.649540302125361
KAMA
Prediction vs Close: 54.1% Accuracy
Prediction vs Prediction: 50.75% Accuracy
MSE: 17.81489427298047
RMSE: 4.220769393485087
MAPE: 3.4008273908825086
MIDPOINT
Prediction vs Close: 50.0% Accuracy
Prediction vs Prediction: 48.13% Accuracy
MSE: 18.523766068694844
RMSE: 4.303924496165662
MAPE: 3.4879205441290337
T3
Prediction vs Close: 52.24% Accuracy
Prediction vs Prediction: 44.4% Accuracy
MSE: 51.75272426881254
RMSE: 7.193936632248893
MAPE: 5.759673530885367
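The per-average scores above can be reproduced with a few lines of plain Python. The directional-accuracy definition below is an assumption about what "Prediction vs Close" measures (sign agreement between the predicted and actual day-over-day moves); the notebook's own helper may define it differently.

```python
import math

def regression_metrics(actual, predicted):
    """MSE, RMSE and MAPE (in percent) between two equal-length series."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mape = 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n
    return mse, rmse, mape

def directional_accuracy(actual, predicted):
    """Percent of days where the predicted move from the previous close has
    the same sign as the actual move (assumed 'Prediction vs Close' metric)."""
    hits = sum(
        1
        for i in range(1, len(actual))
        if (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
    )
    return 100.0 * hits / (len(actual) - 1)
```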
TEMA
TEMA([input_arrays], [timeperiod=30])
Triple Exponential Moving Average (Overlap Studies)
Inputs:
price: (any ndarray)
Parameters:
timeperiod: 30
Outputs:
real
9
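For reference, the indicator described by the help text above is built from three stacked EMAs: TEMA = 3*EMA1 - 3*EMA2 + EMA3, which cancels most of the lag a single EMA introduces. This is a minimal sketch of that standard formula, not TA-Lib's exact implementation (TA-Lib also emits NaNs for the warm-up period).

```python
def ema(values, period):
    """Recursive exponential moving average, smoothing factor 2/(period+1)."""
    alpha = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def tema(values, period=30):
    """Triple EMA: 3*EMA(price) - 3*EMA(EMA(price)) + EMA(EMA(EMA(price)))."""
    e1 = ema(values, period)
    e2 = ema(e1, period)
    e3 = ema(e2, period)
    return [3 * a - 3 * b + c for a, b, c in zip(e1, e2, e3)]
```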
Working on TEMA predictions
parameters used : 808 269
Performing stepwise search to minimize aic
ARIMA(1,3,1)(0,0,0)[0] : AIC=-16736.686, Time=3.37 sec
ARIMA(0,3,0)(0,0,0)[0] : AIC=-15327.143, Time=3.35 sec
ARIMA(1,3,0)(0,0,0)[0] : AIC=-15166.078, Time=7.26 sec
ARIMA(0,3,1)(0,0,0)[0] : AIC=-14962.662, Time=14.08 sec
ARIMA(2,3,1)(0,0,0)[0] : AIC=-16731.606, Time=5.24 sec
ARIMA(1,3,2)(0,0,0)[0] : AIC=-14848.952, Time=9.82 sec
ARIMA(0,3,2)(0,0,0)[0] : AIC=-16921.745, Time=6.38 sec
ARIMA(0,3,3)(0,0,0)[0] : AIC=-14958.662, Time=17.57 sec
ARIMA(1,3,3)(0,0,0)[0] : AIC=-15003.046, Time=13.49 sec
ARIMA(0,3,2)(0,0,0)[0] intercept : AIC=-16752.122, Time=3.84 sec
Best model: ARIMA(0,3,2)(0,0,0)[0]
Total fit time: 84.411 seconds
SARIMAX Results
==============================================================================
Dep. Variable: y No. Observations: 808
Model: SARIMAX(0, 3, 2) Log Likelihood 8492.873
Date: Sun, 12 Dec 2021 AIC -16921.745
Time: 20:04:23 BIC -16771.638
Sample: 0 HQIC -16864.098
- 808
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
x1 2.277e-08 0.001 3.25e-05 1.000 -0.001 0.001
x2 2.286e-08 0.001 2.5e-05 1.000 -0.002 0.002
x3 2.286e-08 0.001 3.44e-05 1.000 -0.001 0.001
x4 1.0000 0.000 3190.279 0.000 0.999 1.001
x5 2.174e-08 0.001 4.21e-05 1.000 -0.001 0.001
x6 6.124e-09 3.05e-05 0.000 1.000 -5.97e-05 5.97e-05
x7 2.246e-08 0.001 1.67e-05 1.000 -0.003 0.003
x8 -0.0013 0.001 -1.669 0.095 -0.003 0.000
x9 -5.239e-09 0.000 -1.79e-05 1.000 -0.001 0.001
x10 0.0001 9.9e-05 1.396 0.163 -5.59e-05 0.000
x11 -0.0001 0.001 -0.177 0.859 -0.002 0.001
x12 0.0012 0.001 1.426 0.154 -0.000 0.003
x13 2.284e-08 0.000 6.75e-05 1.000 -0.001 0.001
x14 6.258e-08 0.001 5.07e-05 1.000 -0.002 0.002
x15 2.215e-08 0.000 0.000 1.000 -0.000 0.000
x16 3.243e-08 0.000 0.000 1.000 -0.001 0.001
x17 2.22e-08 0.000 0.000 1.000 -0.000 0.000
x18 7.527e-09 0.000 1.67e-05 1.000 -0.001 0.001
x19 2.477e-08 0.000 0.000 1.000 -0.000 0.000
x20 -2.348e-08 0.000 -5.78e-05 1.000 -0.001 0.001
x21 2.718e-08 5.8e-05 0.000 1.000 -0.000 0.000
x22 -2.176e-10 0.000 -5.27e-07 1.000 -0.001 0.001
x23 -2.69e-09 8.49e-05 -3.17e-05 1.000 -0.000 0.000
x24 -4.516e-08 7.24e-06 -0.006 0.995 -1.42e-05 1.41e-05
x25 -4.213e-08 2.81e-05 -0.002 0.999 -5.51e-05 5.5e-05
x26 7.946e-08 0.001 0.000 1.000 -0.001 0.001
x27 4.528e-08 0.001 6.21e-05 1.000 -0.001 0.001
x28 5.92e-08 0.001 4.12e-05 1.000 -0.003 0.003
x29 3.468e-08 0.000 7.06e-05 1.000 -0.001 0.001
ma.L1 -1.3739 4.46e-06 -3.08e+05 0.000 -1.374 -1.374
ma.L2 0.3968 1.4e-05 2.84e+04 0.000 0.397 0.397
sigma2 7.701e-11 7.39e-11 1.043 0.297 -6.78e-11 2.22e-10
===================================================================================
Ljung-Box (L1) (Q): 61.47 Jarque-Bera (JB): 5565463.09
Prob(Q): 0.00 Prob(JB): 0.00
Heteroskedasticity (H): 0.01 Skew: 10.97
Prob(H) (two-sided): 0.00 Kurtosis: 409.75
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
[2] Covariance matrix is singular or near-singular, with condition number 6.67e+20. Standard errors may be unstable.
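The enormous Jarque-Bera statistic above is driven by the kurtosis of 409.75, far from the mesokurtic value of 3. A plain-Python sketch of the statistic (JB = n/6 * (S^2 + (K - 3)^2 / 4), with S the skewness and K the kurtosis); plugging in n = 808 with the table's rounded S and K gives roughly 5.6e6, in line with the reported 5565463.09 (the effective sample after d = 3 differencing may differ slightly).

```python
def jarque_bera(x):
    """JB = n/6 * (S**2 + (K - 3)**2 / 4), with S = skewness, K = kurtosis."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    s = m3 / m2 ** 1.5
    k = m4 / m2 ** 2
    return n / 6.0 * (s ** 2 + (k - 3) ** 2 / 4.0)

print(808 / 6 * (10.97 ** 2 + (409.75 - 3) ** 2 / 4))  # roughly 5.6e6
```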
ARIMA order: (0, 3, 2)
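The stepwise search selects by AIC, which for a fitted model is simply 2k - 2*ln(L). Plugging in the summary's log likelihood (8492.873) and its k = 32 estimated parameters (29 exogenous coefficients, two MA terms, and sigma2) recovers the reported AIC to within rounding.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Log likelihood and parameter count from the SARIMAX(0,3,2) summary above;
# the summary reports AIC = -16921.745.
print(aic(8492.873, 32))
```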
Epoch 1/500
Epoch 00001: val_loss improved from inf to 0.04760, saving model to LSTM8.h5
90/90 - 4s - loss: 1.3235 - val_loss: 0.0476 - lr: 0.0010 - 4s/epoch - 42ms/step
Epochs 2-51: val_loss did not improve from 0.04760; loss fell from 1.1133 to 0.6549, val_loss drifted from 0.0536 to 0.0855, and ReduceLROnPlateau cut the lr to 1.0000e-04 at epoch 6 and 1.0000e-05 at epoch 11 (90/90 batches, ~420-540ms/epoch).
Epoch 00051: early stopping
TEMA
Prediction vs Close: 51.49% Accuracy
Prediction vs Prediction: 49.25% Accuracy
MSE: 28.424875173467463
RMSE: 5.331498398524327
MAPE: 4.66633698560039
Runtime: mins: 55.4519855402333
from google.colab import files
import cv2
import matplotlib.pyplot as plt

uploaded = files.upload()
Saving Experiment8.png to Experiment8 (1).png
imgfile = 'Experiment8.png'
# OpenCV loads images as BGR; convert to RGB so matplotlib renders the colours correctly
img = cv2.cvtColor(cv2.imread(imgfile), cv2.COLOR_BGR2RGB)
plt.figure(figsize=(20, 10))
plt.axis("off")
plt.title('LSTM Architecture ' + imgfile, fontsize=18)
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fa5ce50e910>
for SIM in simulation8.keys():
    plot_train(simulation8, SIM)
    plot_test(simulation8, SIM)
----- Train RMSE for SMA ----- 16.984837129738686 ----- Train_MSE_LSTM for SMA ----- 288.48469232374987 ----- Train MAE LSTM for SMA ----- 16.91967470811145
----- Test RMSE for SMA----- 5.549231780992587 ----- Test_MSE_LSTM for SMA----- 30.79397335917816 ----- Test_MAE_LSTM for SMA----- 4.345848876898189
----- Train RMSE for EMA ----- 24.064453268878548 ----- Train_MSE_LSTM for EMA ----- 579.0979111300394 ----- Train MAE LSTM for EMA ----- 24.05741292651337
----- Test RMSE for EMA----- 5.689708043834667 ----- Test_MSE_LSTM for EMA----- 32.37277762407691 ----- Test_MAE_LSTM for EMA----- 4.4297291061987245
----- Train RMSE for WMA ----- 21.99024849687679 ----- Train_MSE_LSTM for WMA ----- 483.571028954392 ----- Train MAE LSTM for WMA ----- 21.95148728153493
----- Test RMSE for WMA----- 7.620210456276704 ----- Test_MSE_LSTM for WMA----- 58.067607397948805 ----- Test_MAE_LSTM for WMA----- 6.244282675104111
----- Train RMSE for DEMA ----- 24.365605462542323 ----- Train_MSE_LSTM for DEMA ----- 593.6827295562723 ----- Train MAE LSTM for DEMA ----- 24.332061871443646
----- Test RMSE for DEMA----- 12.902399474280209 ----- Test_MSE_LSTM for DEMA----- 166.4719121939062 ----- Test_MAE_LSTM for DEMA----- 11.649540302125361
----- Train RMSE for KAMA ----- 17.353875298772294 ----- Train_MSE_LSTM for KAMA ----- 301.1569878853392 ----- Train MAE LSTM for KAMA ----- 17.33646524542629
----- Test RMSE for KAMA----- 4.220769393485087 ----- Test_MSE_LSTM for KAMA----- 17.81489427298047 ----- Test_MAE_LSTM for KAMA----- 3.4008273908825086
----- Train RMSE for MIDPOINT ----- 19.91769524640393 ----- Train_MSE_LSTM for MIDPOINT ----- 396.7145839286217 ----- Train MAE LSTM for MIDPOINT ----- 19.915788006074358
----- Test RMSE for MIDPOINT----- 4.303924496165662 ----- Test_MSE_LSTM for MIDPOINT----- 18.523766068694844 ----- Test_MAE_LSTM for MIDPOINT----- 3.4879205441290337
----- Train RMSE for T3 ----- 22.915612404801223 ----- Train_MSE_LSTM for T3 ----- 525.1252918870797 ----- Train MAE LSTM for T3 ----- 22.913270100508587
----- Test RMSE for T3----- 7.193936632248893 ----- Test_MSE_LSTM for T3----- 51.75272426881254 ----- Test_MAE_LSTM for T3----- 5.759673530885367
----- Train RMSE for TEMA ----- 18.607130117898784 ----- Train_MSE_LSTM for TEMA ----- 346.22529122441597 ----- Train MAE LSTM for TEMA ----- 18.587996395507663
----- Test RMSE for TEMA----- 5.331498398524327 ----- Test_MSE_LSTM for TEMA----- 28.424875173467463 ----- Test_MAE_LSTM for TEMA----- 4.66633698560039
import json

with open('simulation1_data.json') as json_file:
    simulation1 = json.load(json_file)
with open('simulation2_data.json') as json_file:
    simulation2 = json.load(json_file)
with open('simulation3_data.json') as json_file:
    simulation3 = json.load(json_file)
with open('simulation4_data.json') as json_file:
    simulation4 = json.load(json_file)
with open('simulation5_data.json') as json_file:
    simulation5 = json.load(json_file)
with open('simulation6_data.json') as json_file:
    simulation6 = json.load(json_file)
with open('simulation7_data.json') as json_file:
    simulation7 = json.load(json_file)
with open('simulation8_data.json') as json_file:
    simulation8 = json.load(json_file)
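The eight copy-pasted with-blocks can also be collapsed into a loop. A self-contained sketch of the pattern is below; it writes stand-in files to a temp directory, since the real simulation1_data.json ... simulation8_data.json only exist inside the Colab session, and the stand-in contents are hypothetical.

```python
import json
import os
import tempfile

# Stand-ins for simulation1_data.json ... simulation8_data.json.
tmpdir = tempfile.mkdtemp()
for i in range(1, 9):
    with open(os.path.join(tmpdir, f'simulation{i}_data.json'), 'w') as f:
        json.dump({'SMA': {'final': {'mse': float(i), 'rmse': float(i), 'mae': float(i)}}}, f)

# One loop instead of eight near-identical with-blocks.
simulations = []
for i in range(1, 9):
    with open(os.path.join(tmpdir, f'simulation{i}_data.json')) as f:
        simulations.append(json.load(f))

print(simulations[7]['SMA']['final']['mse'])  # 8.0
```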
text = 'Stock with Full dataset '
simulations = [simulation1, simulation2, simulation3, simulation4,
               simulation5, simulation6, simulation7, simulation8]
for i, simulation in enumerate(simulations):
    for ma in simulation.keys():
        print(text + 'Experiment', i + 1, 'for MA:', ma, 'the MSE is:', simulation[ma]['final']['mse'])
        print(text + 'Experiment', i + 1, 'for MA:', ma, 'the RMSE is:', simulation[ma]['final']['rmse'])
        print(text + 'Experiment', i + 1, 'for MA:', ma, 'the MAE is:', simulation[ma]['final']['mae'])
Stock with Full dataset
Experiment | MA | MSE | RMSE | MAE
1 | SMA | 29.509403666146007 | 5.432255854260365 | 4.5288133477558885
1 | EMA | 28.603272766421263 | 5.348202760406646 | 4.3952252144553965
1 | WMA | 80.72598349686672 | 8.9847639644493 | 7.266216353433966
1 | DEMA | 74.8946292448382 | 8.654168316183721 | 7.175854729849037
1 | KAMA | 23.77011386693893 | 4.87546037487117 | 3.900500517739451
1 | MIDPOINT | 53.557107319764015 | 7.318272153983071 | 6.3365268769325365
1 | T3 | 44.09609147416104 | 6.640488797834165 | 5.406095596816415
1 | TEMA | 9.564405293897392 | 3.0926372716336124 | 2.44888799215368
2 | SMA | 75.20110458138421 | 8.67185704341257 | 7.0799160587584336
2 | EMA | 61.82384712230415 | 7.862814198638052 | 6.504666247736678
2 | WMA | 78.06346997131263 | 8.835353415190172 | 6.948265794170055
2 | DEMA | 153.59400995187858 | 12.39330504554288 | 11.203775482220726
2 | KAMA | 121.28941541171922 | 11.013147388994629 | 9.175643045864026
2 | MIDPOINT | 110.09412594018622 | 10.492574800314088 | 8.796456301428389
2 | T3 | 225.73907153615718 | 15.024615520410403 | 12.611725131734374
2 | TEMA | 157.04980120863047 | 12.531951213144364 | 11.294114614846999
3 | SMA | 32.725767438505336 | 5.7206439706125165 | 4.798603095387009
3 | EMA | 143.9522591181831 | 11.998010631691534 | 10.07848404711658
3 | WMA | 24.586224407987817 | 4.958449798877449 | 3.970226889097132
3 | DEMA | 207.2547601932076 | 14.3963453762824 | 12.894635987621164
3 | KAMA | 23.743754657069395 | 4.872756371610364 | 3.7850733762502107
3 | MIDPOINT | 35.62442093531873 | 5.968619684258559 | 5.0490603478808165
3 | T3 | 103.73535640918065 | 10.185055542763655 | 8.016244139827235
3 | TEMA | 39.894741171517865 | 6.316228397668807 | 5.481705479796751
4 | SMA | 19.776724587061057 | 4.447102943159856 | 3.587879520041786
4 | EMA | 31.621751516368622 | 5.623322106759368 | 4.355106062590965
4 | WMA | 52.4753296205182 | 7.2439857551294375 | 5.852253139584933
4 | DEMA | 146.44755629127866 | 12.10155181335347 | 10.943210296434415
4 | KAMA | 19.64215945229788 | 4.4319475913302355 | 3.5686191181651687
4 | MIDPOINT | 19.83404242536117 | 4.453542682557468 | 3.5743844299716057
4 | T3 | 70.66866288490243 | 8.406465540576637 | 6.802843731006552
4 | TEMA | 14.860699364166678 | 3.8549577642519868 | 3.1502795604602833
5 | SMA | 34.39169744803393 | 5.864443490053761 | 4.893666026892695
5 | EMA | 73.04930062485933 | 8.546888359213506 | 6.613879572809731
5 | WMA | 70.35376938042184 | 8.387715385039114 | 6.8547592718484545
5 | DEMA | 70.24761196199488 | 8.381384847505505 | 6.862692730259403
5 | KAMA | 27.01407660930758 | 5.1975067685677505 | 4.263533603346384
5 | MIDPOINT | 37.16076795716489 | 6.095963250969029 | 5.0853544537748006
5 | T3 | 104.49209322707955 | 10.222137409909903 | 7.958642954509092
5 | TEMA | 72.80283545670305 | 8.532457761788397 | 7.653550657820228
6 | SMA | 75.03401716737034 | 8.662217797271685 | 7.077228582293258
6 | EMA | 70.28436187942754 | 8.383576914386099 | 6.876111393338704
6 | WMA | 70.57086226636761 | 8.400646538592587 | 6.6664001460728475
6 | DEMA | 329.6035699397079 | 18.15498746735199 | 16.799244301034683
6 | KAMA | 103.27437965196852 | 10.162400289890599 | 8.510636158449836
6 | MIDPOINT | 97.31838139819504 | 9.86500792692003 | 8.251875922025462
6 | T3 | 154.17386959926716 | 12.41667707558134 | 10.12780101556255
6 | TEMA | 87.38626729945001 | 9.348062221629144 | 8.358174186376312
7 | SMA | 44.65212926265077 | 6.682224873696692 | 5.204686480071648
7 | EMA | 45.539825469272486 | 6.748320196113436 | 5.43245952292463
7 | WMA | 42.30488040231578 | 6.504220199402522 | 5.010195929360332
7 | DEMA | 23.305922116020078 | 4.827620751055335 | 3.7452201197397774
7 | KAMA | 18.082341646298453 | 4.252333670621163 | 3.4333194517527637
7 | MIDPOINT | 91.59813707600279 | 9.57069156727991 | 7.718313236319782
7 | T3 | 145.19295971499469 | 12.049604131049065 | 9.875885491811884
7 | TEMA | 41.1158513706741 | 6.412164328109044 | 5.720374187090847
8 | SMA | 30.79397335917816 | 5.549231780992587 | 4.345848876898189
8 | EMA | 32.37277762407691 | 5.689708043834667 | (output truncated)
MA : EMA the MAE is: 4.4297291061987245 Stock with Full datasetExperiment 8 for MA : WMA the MSE is: 58.067607397948805 Stock with Full datasetExperiment 8 for MA : WMA the RMSE is: 7.620210456276704 Stock with Full datasetExperiment 8 for MA : WMA the MAE is: 6.244282675104111 Stock with Full datasetExperiment 8 for MA : DEMA the MSE is: 166.4719121939062 Stock with Full datasetExperiment 8 for MA : DEMA the RMSE is: 12.902399474280209 Stock with Full datasetExperiment 8 for MA : DEMA the MAE is: 11.649540302125361 Stock with Full datasetExperiment 8 for MA : KAMA the MSE is: 17.81489427298047 Stock with Full datasetExperiment 8 for MA : KAMA the RMSE is: 4.220769393485087 Stock with Full datasetExperiment 8 for MA : KAMA the MAE is: 3.4008273908825086 Stock with Full datasetExperiment 8 for MA : MIDPOINT the MSE is: 18.523766068694844 Stock with Full datasetExperiment 8 for MA : MIDPOINT the RMSE is: 4.303924496165662 Stock with Full datasetExperiment 8 for MA : MIDPOINT the MAE is: 3.4879205441290337 Stock with Full datasetExperiment 8 for MA : T3 the MSE is: 51.75272426881254 Stock with Full datasetExperiment 8 for MA : T3 the RMSE is: 7.193936632248893 Stock with Full datasetExperiment 8 for MA : T3 the MAE is: 5.759673530885367 Stock with Full datasetExperiment 8 for MA : TEMA the MSE is: 28.424875173467463 Stock with Full datasetExperiment 8 for MA : TEMA the RMSE is: 5.331498398524327 Stock with Full datasetExperiment 8 for MA : TEMA the MAE is: 4.66633698560039
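The MSE, RMSE, and MAE values printed by the metrics cell below can be reproduced from a model's predictions with scikit-learn (already installed as a pmdarima dependency). A minimal sketch — `y_true` and `y_pred` here are hypothetical stand-in arrays, not values from the experiments:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

# hypothetical actual prices and hybrid-model forecasts
y_true = np.array([10.0, 12.0, 11.5, 13.0])
y_pred = np.array([10.5, 11.0, 12.0, 12.5])

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)  # taking sqrt ourselves works across sklearn versions
mae = mean_absolute_error(y_true, y_pred)
print('MSE:', mse, 'RMSE:', rmse, 'MAE:', mae)
```

These three scalars per moving-average type are what each `simulation[ma]['final']` dict stores.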
text = 'Stock with Full dataset '
simulations = [simulation1, simulation2, simulation3, simulation4,
               simulation5, simulation6, simulation7, simulation8]
for i, simulation in enumerate(simulations):
    for ma in simulation.keys():
        # print(text+'Experiment ', i+1, ' for MA :', ma, 'the MSE is: ', simulation[ma]['final']['mse'])
        print(text+'Experiment ', i+1, ' for MA :', ma, 'the RMSE is: ', simulation[ma]['final']['rmse'])
        # print(text+'Experiment ', i+1, ' for MA :', ma, 'the MAE is: ', simulation[ma]['final']['mae'])
    for ma in simulation.keys():
        print(text+'Experiment ', i+1, ' for MA :', ma, 'the MSE is: ', simulation[ma]['final']['mse'])
        # print(text+'Experiment ', i+1, ' for MA :', ma, 'the RMSE is: ', simulation[ma]['final']['rmse'])
        # print(text+'Experiment ', i+1, ' for MA :', ma, 'the MAE is: ', simulation[ma]['final']['mae'])
    for ma in simulation.keys():
        # print(text+'Experiment ', i+1, ' for MA :', ma, 'the MSE is: ', simulation[ma]['final']['mse'])
        # print(text+'Experiment ', i+1, ' for MA :', ma, 'the RMSE is: ', simulation[ma]['final']['rmse'])
        print(text+'Experiment ', i+1, ' for MA :', ma, 'the MAE is: ', simulation[ma]['final']['mae'])
Stock with Full dataset — final RMSE, MSE, and MAE per experiment and moving-average type:

Experiment 1
MA         RMSE                 MSE                  MAE
SMA        5.432255854260365    29.509403666146007   4.5288133477558885
EMA        5.348202760406646    28.603272766421263   4.3952252144553965
WMA        8.9847639644493      80.72598349686672    7.266216353433966
DEMA       8.654168316183721    74.8946292448382     7.175854729849037
KAMA       4.87546037487117     23.77011386693893    3.900500517739451
MIDPOINT   7.318272153983071    53.557107319764015   6.3365268769325365
T3         6.640488797834165    44.09609147416104    5.406095596816415
TEMA       3.0926372716336124   9.564405293897392    2.44888799215368

Experiment 2
MA         RMSE                 MSE                  MAE
SMA        8.67185704341257     75.20110458138421    7.0799160587584336
EMA        7.862814198638052    61.82384712230415    6.504666247736678
WMA        8.835353415190172    78.06346997131263    6.948265794170055
DEMA       12.39330504554288    153.59400995187858   11.203775482220726
KAMA       11.013147388994629   121.28941541171922   9.175643045864026
MIDPOINT   10.492574800314088   110.09412594018622   8.796456301428389
T3         15.024615520410403   225.73907153615718   12.611725131734374
TEMA       12.531951213144364   157.04980120863047   11.294114614846999

Experiment 3
MA         RMSE                 MSE                  MAE
SMA        5.7206439706125165   32.725767438505336   4.798603095387009
EMA        11.998010631691534   143.9522591181831    10.07848404711658
WMA        4.958449798877449    24.586224407987817   3.970226889097132
DEMA       14.3963453762824     207.2547601932076    12.894635987621164
KAMA       4.872756371610364    23.743754657069395   3.7850733762502107
MIDPOINT   5.968619684258559    35.62442093531873    5.0490603478808165
T3         10.185055542763655   103.73535640918065   8.016244139827235
TEMA       6.316228397668807    39.894741171517865   5.481705479796751

Experiment 4
MA         RMSE                 MSE                  MAE
SMA        4.447102943159856    19.776724587061057   3.587879520041786
EMA        5.623322106759368    31.621751516368622   4.355106062590965
WMA        7.2439857551294375   52.4753296205182     5.852253139584933
DEMA       12.10155181335347    146.44755629127866   10.943210296434415
KAMA       4.4319475913302355   19.64215945229788    3.5686191181651687
MIDPOINT   4.453542682557468    19.83404242536117    3.5743844299716057
T3         8.406465540576637    70.66866288490243    6.802843731006552
TEMA       3.8549577642519868   14.860699364166678   3.1502795604602833

Experiment 5
MA         RMSE                 MSE                  MAE
SMA        5.864443490053761    34.39169744803393    4.893666026892695
EMA        8.546888359213506    73.04930062485933    6.613879572809731
WMA        8.387715385039114    70.35376938042184    6.8547592718484545
DEMA       8.381384847505505    70.24761196199488    6.862692730259403
KAMA       5.1975067685677505   27.01407660930758    4.263533603346384
MIDPOINT   6.095963250969029    37.16076795716489    5.0853544537748006
T3         10.222137409909903   104.49209322707955   7.958642954509092
TEMA       8.532457761788397    72.80283545670305    7.653550657820228

Experiment 6
MA         RMSE                 MSE                  MAE
SMA        8.662217797271685    75.03401716737034    7.077228582293258
EMA        8.383576914386099    70.28436187942754    6.876111393338704
WMA        8.400646538592587    70.57086226636761    6.6664001460728475
DEMA       18.15498746735199    329.6035699397079    16.799244301034683
KAMA       10.162400289890599   103.27437965196852   8.510636158449836
MIDPOINT   9.86500792692003     97.31838139819504    8.251875922025462
T3         12.41667707558134    154.17386959926716   10.12780101556255
TEMA       9.348062221629144    87.38626729945001    8.358174186376312

Experiment 7
MA         RMSE                 MSE                  MAE
SMA        6.682224873696692    44.65212926265077    5.204686480071648
EMA        6.748320196113436    45.539825469272486   5.43245952292463
WMA        6.504220199402522    42.30488040231578    5.010195929360332
DEMA       4.827620751055335    23.305922116020078   3.7452201197397774
KAMA       4.252333670621163    18.082341646298453   3.4333194517527637
MIDPOINT   9.57069156727991     91.59813707600279    7.718313236319782
T3         12.049604131049065   145.19295971499469   9.875885491811884
TEMA       6.412164328109044    41.1158513706741     5.720374187090847

Experiment 8
MA         RMSE                 MSE                  MAE
SMA        5.549231780992587    30.79397335917816    4.345848876898189
EMA        5.689708043834667    32.37277762407691    4.4297291061987245
WMA        7.620210456276704    58.067607397948805   6.244282675104111
DEMA       12.902399474280209   166.4719121939062    11.649540302125361
KAMA       4.220769393485087    17.81489427298047    3.4008273908825086
MIDPOINT   4.303924496165662    18.523766068694844   3.4879205441290337
T3         7.193936632248893    51.75272426881254    5.759673530885367
TEMA       5.331498398524327    28.424875173467463   4.66633698560039
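Rather than scanning the printed metrics by eye, the best-performing moving average in each experiment can be picked out programmatically. A minimal sketch assuming the same nested shape as `simulation[ma]['final'][metric]` used above — the `demo` dict here is illustrative sample data, not values from an actual run:

```python
def best_ma(simulation, metric='rmse'):
    # return the MA type whose final value of the chosen metric is smallest
    return min(simulation.keys(), key=lambda ma: simulation[ma]['final'][metric])

# illustrative data in the notebook's nested-dict shape (hypothetical values)
demo = {
    'SMA':  {'final': {'mse': 29.51, 'rmse': 5.43, 'mae': 4.53}},
    'KAMA': {'final': {'mse': 23.77, 'rmse': 4.88, 'mae': 3.90}},
    'TEMA': {'final': {'mse': 9.56,  'rmse': 3.09, 'mae': 2.45}},
}
print(best_ma(demo))         # → TEMA (lowest RMSE in this sample)
print(best_ma(demo, 'mae'))  # → TEMA (lowest MAE in this sample)
```

Applied to each element of the `simulations` list, this gives one winner per experiment and metric instead of 192 printed values.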
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
cd drive/MyDrive/Stock price prediction/Archana - LSTM Hybrid
/content/drive/.shortcut-targets-by-id/1IaGjVBlTspxI2CHSrxfYnaiYvsaG0pHs/Stock price prediction/Archana - LSTM Hybrid
%%shell
jupyter nbconvert --to html